
16 January 2023

Freexian Collaborators: Monthly report about Debian Long Term Support, December 2022 (by Anton Gladky)

Like each month, have a look at the work funded by Freexian's Debian LTS offering.

Debian LTS contributors In December, 17 contributors were paid to work on Debian LTS; their reports are available:
  • Abhijith PA did 3.0h (out of 0h assigned and 14.0h from previous period), thus carrying over 11.0h to the next month.
  • Anton Gladky did 8.0h (out of 6.0h assigned and 9.0h from previous period), thus carrying over 7.0h to the next month.
  • Ben Hutchings did 24.0h (out of 9.0h assigned and 15.0h from previous period).
  • Chris Lamb did 18.0h (out of 18.0h assigned).
  • Dominik George did 0.0h (out of 10.0h assigned and 14.0h from previous period), thus carrying over 24.0h to the next month.
  • Emilio Pozuelo Monfort did 8.0h in December, 8.0h in November (out of 1.5h assigned and 49.5h from previous period), thus carrying over 43.0h to the next month.
  • Enrico Zini did 0.0h (out of 0h assigned and 8.0h from previous period), thus carrying over 8.0h to the next month.
  • Guilhem Moulin did 17.5h (out of 20.0h assigned), thus carrying over 2.5h to the next month.
  • Helmut Grohne did 15.0h (out of 15.0h assigned, 2.5h were taken from the extra-budget and worked on).
  • Markus Koschany did 40.0h (out of 40.0h assigned).
  • Ola Lundqvist did 10.0h (out of 7.5h assigned and 8.5h from previous period), thus carrying over 6.0h to the next month.
  • Roberto C. Sánchez did 24.5h (out of 20.25h assigned and 11.75h from previous period), thus carrying over 7.5h to the next month.
  • Stefano Rivera did 2.5h (out of 20.5h assigned and 14.5h from previous period), thus carrying over 32.5h to the next month.
  • Sylvain Beucler did 20.5h (out of 37.0h assigned and 22.0h from previous period), thus carrying over 38.5h to the next month.
  • Thorsten Alteholz did 10.0h (out of 14.0h assigned), thus carrying over 4.0h to the next month.
  • Tobias Frost did 16.0h (out of 16.0h assigned).
  • Utkarsh Gupta did 51.5h (out of 42.5h assigned and 9.0h from previous period).

Evolution of the situation In December, we released 47 DLAs, closing 232 CVEs. Over the whole year, we released a total of 394 DLAs, closing 1450 CVEs. We are constantly growing and seeking new contributors. If you are a Debian Developer and want to join the LTS team, please contact us.

Thanks to our sponsors Sponsors that joined recently are in bold.

8 December 2022

Reproducible Builds: Reproducible Builds in November 2022

Welcome to yet another report from the Reproducible Builds project, this time for November 2022. In all of these reports (which we have been publishing regularly since May 2015) we attempt to outline the most important things that we have been up to over the past month. As always, if you are interested in contributing to the project, please visit our Contribute page on our website.

Reproducible Builds Summit 2022 Following up from last month's report about our recent summit in Venice, Italy, a comprehensive report from the meeting has not been finalised yet; watch this space! As a very small preview, however, we can link to several issues that were filed about the website during the summit (#38, #39, #40, #41, #42, #43, etc.), and we collectively learned about Software Bills of Materials (SBOMs) and how .buildinfo files can be seen/used as SBOMs. And, no less importantly, the Reproducible Builds t-shirt design has been updated.

Reproducible Builds at European Cyber Week 2022 During the European Cyber Week 2022, a Capture The Flag (CTF) cybersecurity challenge was created by Frédéric Pierret on the subject of Reproducible Builds. The challenge was pedagogical in nature, based on how to make a software release reproducible. To progress through the challenge, issues that affect the reproducibility of a build (such as build path, timestamps, file ordering, etc.) had to be fixed step by step in order to obtain the final flag and win the challenge. At the end of the competition, five people succeeded in solving the challenge, all of whom were awarded a shirt. Frédéric Pierret intends to create a similar challenge in the form of a how-to in the Reproducible Builds documentation, but two of the 2022 winners are shown here:

On business adoption and use of reproducible builds Simon Butler announced on the rb-general mailing list that the Software Quality Journal published an article called On business adoption and use of reproducible builds for open and closed source software. This article is an interview-based study which focuses on the adoption and use of Reproducible Builds in industry, investigating in particular the reasons why organisations might not have adopted them:
[…] industry application of R-Bs appears limited, and we seek to understand whether awareness is low or if significant technical and business reasons prevent wider adoption.
This is achieved through interviews with software practitioners and business managers, and touches on both the business and technical reasons supporting the adoption (or not) of Reproducible Builds. The article also begins with an excellent explanation and literature review, and even introduces a new helpful analogy for reproducible builds:
[Users are] able to perform a bitwise comparison of the two binaries to verify that they are identical and that the distributed binary is indeed built from the source code in the way the provider claims. Applied in this manner, R-Bs function as a canary, a mechanism that indicates when something might be wrong, and offer an improvement in security over running unverified binaries on computer systems.
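In practice, one common way to perform the bitwise comparison the quote describes is to hash both artifacts and compare the digests. A minimal sketch in Python (the file names here are hypothetical, not taken from the paper):

    import hashlib
    import sys

    def sha256sum(path: str) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Hypothetical paths: the binary rebuilt locally vs. the one downloaded.
        local, upstream = "hello_local.bin", "hello_upstream.bin"
        a, b = sha256sum(local), sha256sum(upstream)
        print(a, local)
        print(b, upstream)
        # Identical digests mean the distributed binary is bit-for-bit the same
        # as the one rebuilt from source; any difference trips the "canary".
        sys.exit(0 if a == b else 1)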
The full paper is available to download on an open access basis. Elsewhere in academia, Beatriz Michelson Reichert and Rafael R. Obelheiro have published a paper proposing a systematic threat model for a generic software development pipeline, identifying possible mitigations for each threat (PDF). Under the 'Tampering' rubric of their paper, they describe various attacks against Continuous Integration (CI) processes:
An attacker may insert a backdoor into a CI or build tool and thus introduce vulnerabilities into the software (resulting in an improper build). To avoid this threat, it is the developer's responsibility to take due care when making use of third-party build tools. Tampered compilers can be mitigated using diversity, as in the diverse double compiling (DDC) technique. Reproducible builds, a recent research topic, can also provide mitigation for this problem. (PDF)

Misc news
On our mailing list this month:

Debian & other Linux distributions Over 50 reviews of Debian packages were added this month, another 48 were updated and almost 30 were removed, all of which adds to our knowledge about identified issues. Two new issue types were added as well. [ ][ ]
Vagrant Cascadian announced on our mailing list another online sprint to help clear the huge backlog of reproducible builds patches submitted by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd, and others were held on October 6th and October 20th. Two additional sprints occurred in November, resulting in further progress.
Lastly, Roland Clobus posted his latest update on the status of reproducible Debian ISO images on our mailing list. This reports that all major desktops build reproducibly with bullseye, bookworm and sid, and that no custom patches needed to be applied to Debian unstable for this result to occur. During November, however, Roland proposed some modifications to live-setup, and the rebuild script has been adjusted to fix the failing Jenkins tests for Debian bullseye. [ ][ ]
In other news, Miro Hrončok proposed a change to clamp build modification times to the value of SOURCE_DATE_EPOCH. This was initially suggested and discussed in a devel@ mailing list post, but was later written up on the Fedora Wiki as well as being officially proposed to the Fedora Engineering Steering Committee (FESCo).
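The idea behind such clamping is straightforward: any file whose modification time is newer than SOURCE_DATE_EPOCH gets its mtime set back to that value, while older timestamps are left alone. A minimal sketch in Python (not the actual Fedora implementation, just an illustration of the technique; the build directory name is a placeholder):

    import os

    def clamp_mtimes(root: str) -> None:
        """Clamp file modification times under `root` to SOURCE_DATE_EPOCH, if set."""
        sde = os.environ.get("SOURCE_DATE_EPOCH")
        if sde is None:
            return  # nothing to do when the variable is not exported
        epoch = int(sde)
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                if st.st_mtime > epoch:
                    # Keep the access time, clamp only the modification time.
                    os.utime(path, (st.st_atime, epoch))

    if __name__ == "__main__":
        clamp_mtimes("build/")  # hypothetical build output directory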

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 226 and 227 to Debian:
  • Support both python3-progressbar and python3-progressbar2, two modules providing the progressbar Python module. [ ]
  • Don't run Python decompiling tests on Python bytecode that file(1) cannot detect yet and Python 3.11 cannot unmarshal. (#1024335)
  • Don't attempt to attach text-only differences notice if there are no differences to begin with. (#1024171)
  • Make sure we recommend apksigcopier. [ ]
  • Tidy generation of os_list. [ ]
  • Make the code clearer around generating the Debian substvars. [ ]
  • Use our assert_diff helper in test_lzip.py. [ ]
  • Drop other copyright notices from lzip.py and test_lzip.py. [ ]
In addition to this, Christopher Baines added lzip support [ ], and FC Stegerman added an optimisation whereby we don't run apktool if no differences are detected before the signing block [ ].
A significant number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb ensuring the openEuler logo is correctly visible with a white background [ ], FC Stegerman de-duplicating contributors by email address to avoid listing some of them twice [ ], Hervé Boutemy adding Apache Maven to the list of affiliated projects [ ] and boyska updating our Contribute page to remark that the Reproducible Builds presence on salsa.debian.org is not just the Git repository but is also for creating issues [ ][ ]. In addition to all this, however, Holger Levsen made the following changes:
  • Add a number of existing publications [ ][ ] and update metadata for some existing publications as well [ ].
  • Hide draft posts on the website homepage. [ ]
  • Add the Warpforge build tool as a participating project of the summit. [ ]
  • Clarify in the footer that we welcome patches to the website repository. [ ]

Testing framework The Reproducible Builds project operates a comprehensive testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, the following changes were made by Holger Levsen:
  • Improve the generation of meta package sets (used in grouping packages for reporting/statistical purposes) to treat Debian bookworm as equivalent to Debian unstable in this specific case [ ] and to parse the list of packages used in the Debian cloud images [ ][ ][ ].
  • Temporarily allow Frederic to ssh(1) into our snapshot server as the jenkins user. [ ]
  • Keep some reproducible jobs' Jenkins logs much longer [ ] (later reverted).
  • Improve the node health checks to detect failures to update the Debian cloud image package set [ ][ ] and to improve prioritisation of some kernel warnings [ ].
  • Always echo any IRC output to Jenkins output as well. [ ]
  • Deal gracefully with problems related to processing the cloud image package set. [ ]
Finally, Roland Clobus continued his work on testing Live Debian images, including adding support for specifying the origin of the Debian installer [ ] and warning when the image has unmet dependencies in the package list (e.g. due to a transition) [ ].
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can get in touch with us via:

19 November 2022

Joerg Jaspert: From QNAP QTS to TrueNAS Scale

History, Setup So for quite some time I have a QNAP TS-873x here, equipped with 8 Western Digital Red 10 TB disks plus 2 WD Blue 500G M2 SSDs. The QNAP itself has an AMD Embedded R-Series RX-421MD with 4 cores and was equipped with 48G RAM. Initially I had been quite happy; the system is nice. It was fast, it was easy to get to run, and the setup of things I wanted was simple enough. All in a web interface that tries to imitate a kind of workstation feeling and also tries to hide that it is actually a web interface. Naturally, with that amount of disks I had a RAID6 for the disks, plus RAID1 for the SSDs, and then configured it all as a big storage pool with the RAID1 as cache. Under the hood QNAP uses mdadm RAID and LVM (if you want, with thin provisioning), in some form of embedded Linux. The interface allows for regular snapshots of your storage with flexible enough schedules to create them, so it all appears pretty good.
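For a rough sense of the usable space this layout gives: RAID6 keeps two disks' worth of parity, and a RAID1 mirror stores one full copy of the data. A back-of-the-envelope calculation in Python, assuming the nominal disk sizes mentioned above and ignoring filesystem overhead:

    # Back-of-the-envelope capacity for the setup described above.
    # Assumptions: 8 x 10 TB data disks in RAID6, 2 x 500 GB SSDs in RAID1.
    data_disks, disk_tb = 8, 10
    ssd_gb = 500

    raid6_usable_tb = (data_disks - 2) * disk_tb   # RAID6 sacrifices two disks for parity
    raid1_usable_gb = ssd_gb                       # RAID1 mirrors, so one disk's capacity

    print(f"RAID6 pool: ~{raid6_usable_tb} TB usable")   # ~60 TB
    print(f"RAID1 cache: ~{raid1_usable_gb} GB usable")  # ~500 GB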

QNAP slow Fast forward some time and it gets annoying. First off, you really should have regular RAID resyncs scheduled, and while you can set priorities on them and have them run at low priority, they make the whole system feel very sluggish, which is quite annoying. And sure, a power failure (rare, but it can happen) means another full resync run. Also, it appears all of the snapshots are always mounted to some /mnt/snapshot/something place (df on the system gets quite unusable). Second, the reboot times. QNAP seems to be affected by the "more features, fuck performance" virus, and bloats their OS with more and more features while completely ignoring performance. Every time they do an upgrade it feels worse. Lately reboot times went up to 10 to 15 minutes, and then it still hadn't started the virtual machines / docker containers one might run on it. Another 5 to 10 minutes for those. Opening the file explorer: ages on calculating what to show. Trying to get the storage setup shown? Go get a coffee, but please fetch the beans directly from the plantation, or you are too fast. Annoying it was. And no, no broken disks or fan or anything, it all checks out fine.

Replace QNAP's QTS system So I started looking around for what to do. More RAM may help a little bit, but I already had 48G, and the system itself appears to only do 64G maximum, so not much chance of it helping enough. Hardware is all fine and working, so the software needs to be changed. Sounds hard, but turns out, it is not.

TrueNAS And I found that multiple people replaced the QNAP's own system with a TrueNAS installation and generally had been happy. Looking further I found that TrueNAS has a variant called Scale, which is based on Debian. Doubly good, that, so I went off checking what I may need for it.

Requirements Heck, that was a step back. To install TrueNAS you need an HDMI out and a disk to put it on. The one that QTS uses is too small, so no option.
(Image: QNAP's original internal USB drive, the DOM)
So either use one of the SSDs that played cache (and should do so again in TrueNAS), or get the QNAP original replaced. HDMI out is simple: get a cheap card and put it into one of the two PCIe 4x slots, done. The disk thing looked more complicated, as QNAP uses some internal USB stick thing. Turns out it is just a USB stick that has an 8+1 pin connector. Couldn't find anything nice as a replacement, but hey, there are 9-pin to USB-A adapters.
(Image: a 9-pin to USB-A adapter)
With that adapter, one can take some random M2 SSD and an M2-to-USB case, plus some cabling, and voila, we have a nice system disk.
(Image: the 9-pin adapter to USB-A, connected to the motherboard with some more cable)
Obviously there isn't a good place to put this SSD case and cable, but the QNAP case is large enough to find space and use some cable ties to store it safely. There is enough space to route the cable from the side, where the mainboard is, to the place I mounted it, so all fine.
(Image: the mounted SSD in its external case, also showing the video card)
The next best M2 SSD was a Western Digital Red with 500G, and while this is WAY too much for TrueNAS, it works. And hey, only using a tiny fraction? Oh, so many more cells available internally to use when others break. Or something. Together with the Asus card mounted, I was able to install TrueNAS. Which is simple: their installer is easy enough to follow, just make sure to select the right disk to put it on.

Preserving data during the move Switching from QNAP QTS to TrueNAS Scale means changing from mdadm RAID with LVM and ext4 on top to ZFS, and as such all data on it gets erased. So a backup first is helpful, and I got myself two external Seagate USB disks of 6TB each, enough for the data I wanted to keep. Copying things all over took ages, especially as the QNAP backup thingie sucks; it was breaking quite often. Also, for some reason I did not investigate, its performance was really bad. It started at a maximum of 50MB/s, but the last terabyte of data was copied at MUCH less than that, so it took much longer than I anticipated. Copying back was slow too, but much less so. Of course reading things usually is faster than writing, with it going at around 100MB/s most of the time, which is quite a bit more; still not what USB3 can actually do, but I guess the AMD chip doesn't want to go that fast.
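To put those transfer rates into perspective, here is a small, assumption-laden calculation in Python (the 6 TB figure is simply the size of one backup disk; the actual data volume and sustained rates will differ):

    # Rough copy-time estimate for the backup described above.
    # Assumptions: 6 TB of data, sustained rates of 50 MB/s (backup) and 100 MB/s (restore).
    data_tb = 6
    data_mb = data_tb * 1000 * 1000  # using decimal units, as disk vendors do

    for label, rate_mb_s in [("backup at 50 MB/s", 50), ("restore at 100 MB/s", 100)]:
        hours = data_mb / rate_mb_s / 3600
        print(f"{label}: ~{hours:.0f} hours")   # roughly 33 h and 17 h respectively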

TrueNAS experience The installation went mostly smoothly; the only real trouble had been on my side. Turns out that a bad network cable does NOT help the network setup, who would have thought. Other than that it is the usual set of questions you would expect, a reboot, and then some web interface. And here the differences start. The whole system boots up much faster. Not even a third of the time compared to QTS. One important thing: as TrueNAS Scale is Debian based, and hence runs a Linux kernel, it automatically detects and assembles the old RAID arrays that QTS had put on the disks. TrueNAS can do nothing with them, so it helps to manually stop them and wipe the disks. Afterwards I put ZFS on the disks, with a similar setup to what I had before. The spinning rust are the data disks in a RAIDZ2 setup, and the two SSDs are added as cache devices. Unlike mdadm, ZFS does not have a long sync process. Also, unlike the mdadm/LVM/ext4 setup from before, ZFS works differently: it manages the RAID part, but it also does the volume and filesystem parts. Quite different handling, and I'm still getting used to it, so no, I won't write some ZFS introduction now.

Features The two systems cannot be compared completely; they have pretty different target audiences. QNAP is more for the user that wants some network storage that offers a ton of extra features easily available via a clickable interface, while TrueNAS appears more oriented towards people that want a fast but reliable storage system. TrueNAS does not offer all the extra bloat the QNAP delivers. Still, you have the ability to run virtual machines, and it seems it comes with Rancher, so some Kubernetes/container ability is there. It lacks essential features like assigning PCI devices to virtual machines, so that part is not useful right now, but I assume that will come in a future version. I am still exploring it all, but I like what I have right now. I am still rebuilding my setup to have all shares exported and used again, but the most important ones are working already.

3 November 2022

Arturo Borrero González: New OpenPGP key and new email

I'm trying to replace my old OpenPGP key with a new one. The old key wasn't compromised or lost or anything bad; it is still valid, but I plan to get rid of it soon. It was created in 2013. The new key fingerprint is: AA66280D4EF0BFCC6BFC2104DA5ECB231C8F04C4 I plan to use the new key for things like encrypted emails, uploads to the Debian archive, and more. Also, the new key includes an identity with a newer personal email address I plan to use soon: arturo.bg@arturo.bg The new key has been uploaded to some public keyservers. If you would like to sign the new key, please follow the steps in the Debian wiki.
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBGNjvX4BEADE4w5x0SQmxWLAI1R17RCC98ngTkD/FMyos0GF5xmv0VJeLYhw
x6oJRmiNGHY8+gjq7SyVCWmlwbLKBEPFNI1k5WcrTB+ClgGkWB5KBnbLKm6CSP4N
ccSbrUQrZW+zxk3Q5h3CJljZpmflB2dvRfnDMSSaw8zOc37EtszW3AVVKNYAu3wj
mXpfwI72/OSELhSvhkr51L+ZlEYUMCITeO+jpiWsnU+sA8oKKPjW4+X8cjrN4eFa
1PAPILDf+Omst5SKM2aV5LGZ8rBzb5wNJF6yDexDw2XmfbFWLOfYzFRY6GTXJz/p
8Fh6O1wkHM9RnwmesCXTtkaGQsVFiVsoqGFyzrkIdWPUruB3RG5EzOkapWi/cnbD
1sy7yrUgy99Ew5yzmLaZ40hmRyq/gBBw4yRkdQaddbkErx+9hT+2tJELa5wrmWkb
FtaVZ38xC6gacOZqRjp0Xqtr0jobI0vED8vzIyY0zJwWM0Hu6qqq4hkLWZHjCy8a
T5Oe/Cb78Kqwa2mzJfncDahPxcgxpnbkYdvKokRtNBDftLVEz+Do8Dczw7Me4BoK
HmU8wLyeGeDTmeoBXpxKH90T+rQokgsiiD13bWZ+nBxILun1tjOTVVONG6SHdP3f
unolq8SU3K+m67lLa+pWjyYcNRS2OTWGOz/1zsH2R39ZOyfGD09/10aAKwARAQAB
tC1BcnR1cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJnQGFydHVyby5iZz6J
AlQEEwEKAD4WIQSqZigNTvC/zGv8IQTaXssjHI8ExAUCY2O9fgIbAwUJA8JnAAUL
CQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRDaXssjHI8ExCZdD/9Z3vR4sV7vBED4
+mCjdNWWf/mw5YlkZo+XQiMVVss4HfQLdt7VxXgGdcOz5Hond9ax3+qeCEo4DdXq
TC0ACpSCu/TPil6vzbE/kO6i6a4oZjFyteAbbcMXP35stbtDM0U5EZH0adIKknfF
msIPTIdJ/dpkcshtBJIoPqjuuTEBa7bF3OYCajHVqwP4Wsgjy4TvDOwl3hy7bhrQ
ZZHqbh7kW40+alQYaJ8jDvbDh/jhN1/pEiZS9ETu0JfBAF3PYPRLW6XedvwZiPWd
jTXwJd0E+vN5LE1Go8OaYvZb9iitZ21UaYOUnFuhw7SEOSQGfEUBs39+41gBj6vW
05HKCEA6kda9NpfptMbUoSSU+hwRfNA5TdnlxtcRv4NqUigzqa1LoXLdxTsyus+K
BL7dRpKXc72JCrEA3vClisD2FgsxLLRCCSDVM8UM/it/YW7tv42XuhQkTW+okQX4
c5laMzTL+ZV8UOoshseTDOsQsdXhskdnWbnuSwAez2/Dd1gHczuN/+lPiiEnyaTF
XgH17K/F25+92MmwPQcFRVPQcYcbyx1VylA6aCgK6gOEqHCejlZv5XLouzbQh1j1
k6MjUR1ncz8vPV5xSuOMAISqozJ9GxUZT2O3o9Vc9pNg5UEzqTvyURgLOdie8yM4
T93S3nKuHVZ++ZVxEOlPnfEfbFP+xbQrQXJ0dXJvIEJvcnJlcm8gR29uemFsZXog
PGFydHVyb0BkZWJpYW4ub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY73LAhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEMKQQAIe18Np+jdhwxHEFZNppBQ69BtyrnPQg4K5VngZ0NUZdVi+/FU7q
Tc9Z1qNydnXgmav3dafL2/l5zDX9wz7mQD2F0a6luOxZwl1PE6iP5f3cUD7uC9zb
148i1bZGEJbO4iNZKTlJKlbNR9m1PG47pv964CHZnNGp6lsnEspxe2G8DJD48Pje
gbhYukgOtIhQ1CaB1fc8aVwZvXZVSbNBLAqp7pAGhTFJqzHE8/U0sn1/V/wPzFAd
TZtWzKfYAkIIFJI5Rr6LVApIwIe7nWymTdgH4crCd2GZkGR+d6ihPKVSxUAUfoAx
EJQUSJY8rYi39gSDhPuEoK8BYXS1nWFGJiNV1o8xaljQo8rNT9myCaeZuQBLX41/
LRzK4XrxYPvjZpKNucc7fSK+UFriQGzdcAaWtW45Kp/8GmAoLVyCD0DPZNWNJdxp
IORhB33aWakhvDKgaLQa16MJ8fSc3ytn/1lxWzDXA1j05i81y/AOKPtCwBKzQWPF
biuZs3kJgZagLq6L6VOQDHlKqf+jqfl1fWeo04iDg98e0TYKABUfiTz8/MdQcV/X
8VkCgtuZ8BcPPyYzBjvuXWZTvdu0n2pikqAPL4u2cbWfD8JIP2AVCJp9HMGKvENo
XcJgY4h6T3rrC/9EidxECfXlsDbUJxLq0WfJLik84+LRtde3kZiReaIRtC5BcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvQG5ldGZpbHRlci5vcmc+iQJUBBMB
CgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvd8CGwMFCQPCZwAFCwkIBwMF
FQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMSP/g/+MHmxCAi/X+NMHodg9Qou
wEG4Vf1uluAE6c+c1QECCdtSsRjBs1dZoJzGsA23t4LWqluyaptuLDWJQEz+EVKR
mG0bvvropNaoOEShnY069pg7lUHuO/GLeDRhfEH3KT45sIVbLly8QkoGaINSCDLe
RBNaHC6feIC8NfQzQEt72nbi4SgdSQUg0F3lj4WxxECVhXsw/YCqh1d3QYqwRVEE
lCGQ4EbavjtRhO8U7dcL1VwHemKHNq3XvM3PJf1OoPgxWqFW5rHbAdlXdN3WAI6u
DAy7kY+qihz3w6rIDTFq6I3YBTrZ44J+5mN21ZC2iDXAsa/C3Uam0vFsjs/pizuq
WgGI9Vmsyap+bOOjuRSX4hemZoOT4a2GC723fS1dFresYWo3MmwfA3sjgV5tK3ZN
XIpxYIvi6HAHLOAarDaE8Sha1GHvrmPwfZ+cEgTL0mqW3efSF3AFmGHduMB+agzK
rM9sksrRQhbY2fHnBLo1t06SQx3rmhlz5mD1ljQEIzna9D6QKleRu4hgImRLHnCB
CN3o+mZa1MHhaIFzViaD2i3Fv2+bYgT7vnS4QAneLW8O/ZgpAc2MUxMoci5JNyfJ
mWdae7Kbs4Z8rrt/mH2gYyioSB0po4VtVwKWEUW9cLtZusA6mFnMviFpfjakb9TX
MimBAv9hAYpxd+HdfHinmqS0MEFydHVybyBCb3JyZXJvIEdvbnphbGV6IDxhYm9y
cmVyb0B3aWtpbWVkaWEub3JnPokCVAQTAQoAPhYhBKpmKA1O8L/Ma/whBNpeyyMc
jwTEBQJjY735AhsDBQkDwmcABQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJENpe
yyMcjwTEGooP/20PR5N34m7CNtyaO96H5W0ULuAuSNuoXaKWDo5LGU6zzDriXbIu
ryYtR66vWF5suf7fHZYX8Ufq4PEsG1UNYEGA9hnjPg3oVwGzBJI7f6Rl2P5Pc8wJ
Eq2kN/xKmfUKIrvgh1f5xgFqC4hzcLDkVlLsPowZWfep8dLY4mtVrsrCD1URhelw
zRDGZ3rTVHWXmfXbSHWR2bgZIIrCtVF8BHStg5b6HuAWpj4Oa0eMfBde0N2RZkLE
ye/r2y/lraHfpT7MXnRMcEmltrv8fic7yvj/Nh4ESWr7UmfbV+GiSw9dc/AlVMXM
ihaW0eXv4F5uMtLJOiqI7bv3UfWSvoqwf2a8EPnzOeBBHhQOOJN7O4UzKBK5GAO8
C3k0I1AV3cTmrXrqT/5yoYAHSekDFCIPES//6Y/pO0ITtCbXkA5e8vaulJbtyXpE
g0Z7I7M1kikL6reZ2PuzsR0psEb/x81bWXODIegyOJolPXMRAY7n9J0xpCnSW9yr
CN4j6YT3Oame04JslwX5Xg1cyheuiusotETYNSKRaGaYBCxYffOWoTLNIBa+RCGc
SVOzJq5pd8fVRM1h2ZZFnfpPJBUb62qPsbk6VwmesGoGevB70zcNQYEI+c35kRfM
IOuJWRIN3Wxx0rpxb5E3i/3TASHM86Dix1VW9vsC/atGU/cgaoTOiNVztDdBcnR1
cm8gQm9ycmVybyBHb256YWxleiA8YXJ0dXJvLmJvcnJlcm8uZ2xlekBnbWFpbC5j
b20+iQJUBBMBCgA+FiEEqmYoDU7wv8xr/CEE2l7LIxyPBMQFAmNjvg8CGwMFCQPC
ZwAFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQ2l7LIxyPBMS7NA/9F7OL/j7a
xnTDjxAHEiyrCzrBQc/DEAM/yim8E+0UBeTJSZR/bShtbvLbSukeL43tKksPhN/X
skjRF8sJ8KWUnpmSWjv1DQTh7AtkJqACnq7+VtQZq3yuKUCNRNpM8lSFxtmYDUqE
XXD4eMXKoJfdphQ+qpViba+RGXg6sd69Dq739zT/OFMuKZ33z8h7hVNXmoWGcBz6
txvN3cWVJhTLdiBvtn38/0dX7IupQLypLOtP0oZdjoUjkRxTo5biOxt3hUGnxS4x
97PPeRGc4j7lv5ADwFV8bo+g54ZMGRjOcyZmA7dlWFN51JrTx3udW2jgXkYqm7UM
xP4lNwDs9TmT3jan6wR08uwlDakOXfDm3gCQEviN+350sJs2tY+JKBN4QR7NpqeU
2aDFOo0G/0ggf0QbFsMkaTSozerVHRGXMdAi+pbYA6pPWPu8lHIkvvdoj4xUu+Ko
cHX0DCRxmL9mylTbZEanrp5gSpne79McrkbQX2/Yc8lWykCtL5/jHVTD4iNiO5Rf
IJYPAVmC2nlj2URfzwGjjoL5apTStZfng4H2Ccq+3cmhwOXI7pb+PsGeI5PND00A
qHFxe590HFhPxLHoftMIlspstoCvHYGcWQxHNbXW6ccmhHdNYT8Pn4ecKgfr6pCt
0ysilOD2ppPJ88hffKA4nTdtX2Tz2ZwOYwG5Ag0EY2O9fgEQALrapVuv1IcLDit8
9gejdA/Dtlufb2/baImVaQD+dTx2QdMxxEiNKl00a5OhMzXDj9tFrB1Lv4z0t8cY
iDJ+NuydDGgz3MlJgWW0GlpAz8yiul2iqTnkWl3cWeiI+VaX8wzL+acmmkPvlrN8
hM7I55BPr8uBWVIQ7VDmI+ts8gi73xE+Etzzrh13GSSnnYnezfGUQrNfYFcip7D0
hB3bpUIGiPdQ45vSZqXUQx/B6FlabiIGRau8Rt4vaEBGXGFZ9rIR+rMJWx6GqYX4
uY1KM2JZ3SKHk++MWGYdzHdM2oaP6xckZq+u/WiwutkYLLO2hnr03lcAu1IDT1C1
YNPrbTKfqUt+3r0oUK5BrG1Cjdc1mZqcXzYcexOLp79FJLb0t5wPdfgU8dT10kjE
uQxeSYiS4oSpikVQkKoFk++/U95d/z/y/81A6v+cfRus6mW+wRSFSwks7Q5ct7zW
UyKELLC4i4EDgnJXmavVcBD0TWzhH/rZpz9FsO4Mb18IYwbV1/144019/RjiPk5Z
MMNdsjorjV2MtrCIoeAGRgZhbFP2P7CcZOp6ZWzjj40ENlElbLp3VCfkYcTiPHJv
2iaiDz2Mhfmhb1Q/5d/a9tYTYINPmv2QVo+m5Zf+1/U29d2HZMRhD4aqDsivvgtd
GpAnKeus6ePSMqpwjO6v2bmQhjpbABEBAAGJAjwEGAEKACYWIQSqZigNTvC/zGv8
IQTaXssjHI8ExAUCY2O9fgIbDAUJA8JnAAAKCRDaXssjHI8ExA5AD/9VWS1/jHM9
aE3HKCDL4CpiXQPc4ds+3/ft6LXwuCMA/tkt8I4svKZGCCi/X5NfiQetVD+cSzVO
nmloctMt/24yjnGNNSFsDozkn/RqzZIhLJBI69gX4JWR4wpeh4kXMItNM5ZlYw3H
DmuLrf/ey8E2NzbFdzj1VQNoENuwtL2pIJrvK92AcS7acvP0FpiS8riLc5a933SW
oPgelQ1j/04WAH8cyKXB/pruq3OhtK0/b8ylIeI0f7a57dxQj5wysyBVKl+EJd/n
UhypVqMDRWL7N0FttGb9gZ6OVvQnt7iwbtS3tYqAK479+GZwi/Wh/RB2dCDyz8jk
zE0j6y7huP4XzpbBbPVntLDdVAYmpW6iIaTWYxlu79FEUw4JmZdY7hJoEDpHuDIz
ylo0YQgjnRfRfWSdnGCosFrY5UgThPVTaQAILCPtdVyWY4/6s1UaeNs3H0PRA5mz
UT4vDKxGq9gXHnE+qg3dfwMcLR3cDPPWUFVeTfNitZ3Y9eV7SdbQXt5NeOXzFadz
DBc9ZzNx3rBEyUUooU0MEmbltyUFM7R/hVcdpFxs12SgHrvgh13tuxVVVNBXTwwo
pSxmap42vHJERQ8ZJQ4lrvnxNZcuwLHSZK7xVzb0b/1wMooNnhw18vlStMWQJwKl
DiXs/L/ifab2amg9jshULAPgVSw7QeP2OQ==
=UABf
-----END PGP PUBLIC KEY BLOCK-----
If you are curious about what that long code block contains, check this: https://cirw.in/gpg-decoder/ For the record, the old key fingerprint is: DD9861AB23DC3333892E07A968E713981D1515F8 Cheers!

7 October 2022

Reproducible Builds: Reproducible Builds in September 2022

Welcome to the September 2022 report from the Reproducible Builds project! In our reports we try to outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone may inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries. If you are interested in contributing to the project, please visit our Contribute page on our website.
David A. Wheeler reported to us that the US National Security Agency (NSA), Cybersecurity and Infrastructure Security Agency (CISA) and the Office of the Director of National Intelligence (ODNI) have released a document called Securing the Software Supply Chain: Recommended Practices Guide for Developers (PDF). As David remarked in his post to our mailing list, it expressly recommends having reproducible builds as part of 'advanced' recommended mitigations. The publication of this document has been accompanied by a press release.
Holger Levsen was made aware of a small Microsoft project called oss-reproducible. Part of OSSGadget, a larger collection of tools for analyzing open source packages, the purpose of oss-reproducible is to:
analyze open source packages for reproducibility. We start with an existing package (for example, the NPM left-pad package, version 1.3.0), and we try to answer the question, Do the package contents authentically reflect the purported source code?
More details can be found in the README.md file within the code repository.
David A. Wheeler also pointed out that there are some potential upcoming changes to the OpenSSF Best Practices badge for open source software in relation to reproducibility. Whilst the badge programme has three certification levels ('passing', 'silver' and 'gold'), the gold level includes the criterion that 'The project MUST have a reproducible build'. David reported that some projects have argued that this reproducibility criterion should be slightly relaxed, as outlined in an issue on the best-practices-badge GitHub project. Essentially, though, the claim is that the reproducibility requirement doesn't make sense for projects that do not release built software, and that timestamp differences by themselves don't necessarily indicate malicious changes. Numerous pragmatic problems around excluding timestamps were raised in the discussion of the issue.
Sonatype, a pioneer of 'software supply chain management', issued a press release this month to report that they had found:
[…] a massive year-over-year increase in cyberattacks aimed at open source project ecosystems. According to early data from Sonatype's 8th annual State of the Software Supply Chain Report, which will be released in full this October, Sonatype has recorded an average 700% jump in repository attacks over the last three years.
More information is available in the press release.
A number of changes were made to the Reproducible Builds website and documentation this month, including Chris Lamb adding a redirect from /projects/ to /who/ in order to keep old or archived links working [ ], Jelle van der Waa added a Rust programming language example for SOURCE_DATE_EPOCH [ ][ ] and Mattia Rizzolo included Protocol Labs amongst our project-level sponsors [ ].

Debian There was a large amount of reproducibility work taking place within Debian this month:
  • The nfft source package was removed from the archive, and all packages in Debian bookworm now have a corresponding .buildinfo file. This can be confirmed and tracked on the associated page on the tests.reproducible-builds.org site.
  • Vagrant Cascadian announced on our mailing list an informal online sprint to help clear the huge backlog of reproducible builds patches submitted by performing NMUs (Non-Maintainer Uploads). The first such sprint took place on September 22nd with the following results:
    • Holger Levsen:
      • Mailed #1010957 in man-db asking for an update and whether to remove the patch tag for now. This was subsequently removed and the maintainer started to address the issue.
      • Uploaded gmp to DELAYED/15, fixing #1009931.
      • Emailed #1017372 in plymouth and asked for the maintainer's opinion on the patch. This resulted in the maintainer improving Vagrant's original patch (and uploading it) as well as filing an issue upstream.
      • Uploaded time to DELAYED/15, fixing #983202.
    • Vagrant Cascadian:
      • Verified and updated the patch for mylvmbackup. (#782318)
      • Verified/updated patches for libranlip. (#788000, #846975 & #1007137)
      • Uploaded libranlip to DELAYED/10.
      • Verified patch for cclive. (#824501)
      • Uploaded cclive to DELAYED/10.
      • Vagrant was unable to reproduce the underlying issue within #791423 (linuxtv-dvb-apps), and so the bug was marked as done.
      • Researched #794398 (in clhep).
    The plan is to repeat these sprints every two weeks, with the next taking place on Thursday October 6th at 16:00 UTC on the #debian-reproducible IRC channel.
  • Roland Clobus posted his 13th update of the status of reproducible Debian ISO images on our mailing list. During the last month, Roland ensured that the live images are now automatically fed to openQA for automated testing after they have been shown to be reproducible. Additionally Roland asked on the debian-devel mailing list about a way to determine the canonical timestamp of the Debian archive. [ ]
  • Following up on last month s work on reproducible bootstrapping, Holger Levsen filed two bugs against the debootstrap and cdebootstrap utilities. (#1019697 & #1019698)
Lastly, 44 reviews of Debian packages were added, 91 were updated and 17 were removed this month adding to our knowledge about identified issues. A number of issue types have been updated too, including the descriptions of cmake_rpath_contains_build_path [ ], nondeterministic_version_generated_by_python_param [ ] and timestamps_in_documentation_generated_by_org_mode [ ]. Furthermore, two new issue types were created: build_path_used_to_determine_version_or_package_name [ ] and captures_build_path_via_cmake_variables [ ].

Other distributions In openSUSE, Bernhard M. Wiedemann published his usual openSUSE monthly report.

diffoscope diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 222 and 223 to Debian, as well as made the following changes:
  • The cbfstools utility is now provided in Debian via the coreboot-utils package so we can enable that functionality within Debian. [ ]
  • Looked into Mach-O support.
  • Fixed the try.diffoscope.org service by addressing a compatibility issue between glibc/seccomp that was preventing the Docker-contained diffoscope instance from spawning any external processes whatsoever [ ]. I also updated the requirements.txt file, as some of the specified packages were no longer available [ ][ ].
In addition Jelle van der Waa added support for file version 5.43 [ ] and Mattia Rizzolo updated the packaging:
  • Also include coreboot-utils in the Build-Depends and Test-Depends fields so that it is available for tests. [ ]
  • Use pep517 and pip to load the requirements. [ ]
  • Remove packages in Breaks/Replaces that have been obsoleted since the release of Debian bullseye. [ ]

Reprotest reprotest is our end-user tool to build the same source code twice in widely and deliberately different environments, and then check whether the binaries produced by the builds have any differences. This month, reprotest version 0.7.22 was uploaded to Debian unstable by Holger Levsen, which included the following changes by Philip Hands:
  • Ensure that the setarch(8) utility can actually execute before including an architecture to test. [ ]
  • Include all files matching *.*deb in the default artifact_pattern in order to archive all results of the build. [ ]
  • Emit an error when building the Debian package if the Debian packaging version does not match the Python version of reprotest. [ ]
  • Remove an unneeded invocation of the head(1) utility. [ ]

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. This month, however, the following changes were made:
  • Holger Levsen:
    • Add a job to build reprotest from Git [ ] and use the correct Git branch when building it [ ].
  • Mattia Rizzolo:
    • Enable syncing of results from building live Debian ISO images. [ ]
    • Use scp -p in order to preserve modification times when syncing live ISO images. [ ]
    • Apply the shellcheck shell script analysis tool. [ ]
    • In a build node wrapper script, remove some debugging code which was messing up calling scp(1) correctly [ ] and consequently add support to use both scp -p and regular scp [ ].
  • Roland Clobus:
    • Track and handle the case where the Debian archive gets updated between two live image builds. [ ]
    • Remove a call to sudo(1) as it is not (or no longer) required to delete old live-build results. [ ]

Contact As ever, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

30 August 2022

John Goerzen: The PC & Internet Revolution in Rural America

Inspired by several others (such as Alex Schroeder's post and Szczeżuja's prompt), as well as a desire to get this down for my kids, I figure it's time to write a bit about living through the PC and Internet revolution where I did: outside a tiny town in rural Kansas. And, as I've been back in that same area for the past 15 years, I reflect some on the challenges that continue to play out. Although the stories from the others were primarily about getting online, I want to start by setting some background. Those of you that didn't grow up in the same era as I did probably never realized that a typical business PC setup might cost $10,000 in today's dollars, for instance. So let me start with the background.

Nothing was easy This story begins in the 1980s. Somewhere around my Kindergarten year of school, around 1985, my parents bought a TRS-80 Color Computer 2 (aka CoCo II). It had 64K of RAM and used a TV for display and sound. This got you the computer. It didn't get you any disk drive or anything, and no joysticks (required by a number of games). So whenever the system powered down, or it hung and you had to power cycle it (a frequent event), you'd lose whatever you were doing and would have to re-enter the program, literally by typing it in. The floppy drive for the CoCo II cost more than the computer, and it was quite common for people to buy the computer first and then the floppy drive later when they'd saved up the money for that. I particularly want to mention that computers then didn't come with a modem. That would be like buying a laptop or a tablet without wifi today. A modem, which I'll talk about in a bit, was another expensive accessory. To cobble together a system in the 80s that was capable of talking to others, with persistent storage (floppy, or hard drive), screen, keyboard, and modem, would be quite expensive. Adjusted for inflation, if you're talking a PC-style device (a clone of the IBM PC that ran DOS), this would easily be more expensive than the Macbook Pros of today. Few people back in the 80s had a computer at home. And the portion of those that had even the capability to get online in a meaningful way was even smaller. Eventually my parents bought a PC clone with 640K RAM and dual floppy drives. This was primarily used for my mom's work, but I did my best to take it over whenever possible. It ran DOS and, despite its monochrome screen, was generally a more capable machine than the CoCo II. For instance, it supported lowercase. (I'm not even kidding; the CoCo II pretty much didn't.) A while later, they purchased a 32MB hard drive for it. What luxury! Just getting a machine to work wasn't easy. Say you'd bought a PC, and then bought a hard drive, and a modem. You didn't just plug in the hard drive and it would work. You would have to fight it every step of the way. The BIOS and DOS partition tables of the day used a cylinder/head/sector method of addressing the drive, and various parts of those addresses had too few bits to work with the big drives of the day above 20MB. So you would have to lie to the BIOS and fdisk in various ways, and sort of work out how to do it for each drive. For each peripheral (serial port, sound card (in later years), etc.), you'd have to set jumpers for DMA and IRQs, hoping not to conflict with anything already in the system. Perhaps you can now start to see why USB and PCI were so welcomed.
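To see why those address fields ran out of room so quickly, it helps to multiply out a cylinder/head/sector (CHS) geometry. The snippet below is a worked illustration in Python; the specific 20MB-era geometry is an illustrative assumption rather than a description of any particular drive the author owned:

    # CHS capacity = cylinders * heads * sectors_per_track * 512 bytes per sector.

    def chs_capacity_mb(cylinders: int, heads: int, sectors: int) -> float:
        return cylinders * heads * sectors * 512 / 1_000_000

    # A typical early 20MB drive geometry (illustrative assumption).
    print(f"~{chs_capacity_mb(615, 4, 17):.0f} MB")    # roughly 21 MB

    # Classic BIOS INT 13h fields allow 1024 cylinders and 63 sectors, and early
    # BIOS/IDE combinations could only use 16 heads, giving the well-known barrier:
    print(f"~{chs_capacity_mb(1024, 16, 63):.0f} MB")  # about 528 MB (the ~504 MiB limit)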

Sharing and finding resources Despite the two computers in our home, it wasn't as if software written on one machine just ran on another. A lot of software for PC clones assumed a CGA color display. The monochrome HGC in our PC wasn't particularly compatible. You could find a TSR program to emulate the CGA on the HGC, but it wasn't particularly stable, and there's only so much you can do when a program that assumes color is displayed on a monitor that can only show black, dark amber, or light amber. So I'd periodically get to use other computers, most commonly at an office in the evening when it wasn't being used. There were some local computer clubs that my dad took me to periodically. Software was swapped back then; disks copied, shareware exchanged, and so forth. For me, at least, there was no 'online' to download software from, and selling software over the Internet wasn't a thing at all.

Three Different Worlds There were sort of three different worlds of computing experience in the 80s:
  1. Home users. Initially using a wide variety of software from Apple, Commodore, Tandy/RadioShack, etc., but eventually coming to be mostly dominated by IBM PC clones
  2. Small and mid-sized business users. Some of them had larger minicomputers or small mainframes, but most that I had contact with by the early 90s were standardized on DOS-based PCs. More advanced ones had a network running Netware, most commonly. Networking hardware and software was generally too expensive for home users to use in the early days.
  3. Universities and large institutions. These are the places that had the mainframes, the earliest implementations of TCP/IP, the earliest users of UUCP, and so forth.
The difference between the home computing experience and the large institution experience was vast. Not only in terms of dollars (the large institution hardware could easily cost anywhere from tens of thousands to millions of dollars) but also in terms of sheer resources required (large rooms, enormous power circuits, support staff, etc). Nothing was in common between them; not operating systems, not software, not experience. I was never much aware of the third category until the differences started to collapse in the mid-90s, and even then I was only exposed to it once the collapse was well underway. You might say to me, "Well, Google certainly isn't running what I'm running at home!" And, yes of course, it's different. But fundamentally, most large datacenters are running on x86_64 hardware, with Linux as the operating system, and a TCP/IP network. It's a different scale, obviously, but at a fundamental level, the hardware and operating system stack are pretty similar to what you can readily run at home. Back in the 80s and 90s, this wasn't the case. TCP/IP wasn't even available for DOS or Windows until much later, and when it was, it was a clunky beast that was difficult to work with. One of the things Kevin Driscoll highlights in his book Modem World (see my short post about it) is that the history of the Internet we usually receive is focused on case 3: the large institutions. In reality, the Internet was and is literally a network of networks. Gateways to and from the Internet existed from all three kinds of users for years, and while TCP/IP ultimately won the battle of the internetworking protocol, the other two streams of users also shaped the Internet as we now know it. Like many, I had no access to the large institution networks, but as I've been reflecting on my experiences, I've found a new appreciation for the way that those of us that grew up with primarily home PCs also shaped the evolution of today's online world.

An Era of Scarcity I should take a moment to comment about the cost of software back then. A newspaper article from 1985 comments that WordPerfect, then the most powerful word processing program, sold for $495 (or $219 if you could score a mail order discount). That's $1360/$600 in 2022 money. Other popular software, such as Lotus 1-2-3, was up there as well. If you were to buy a new PC clone in the mid to late 80s, it would often cost $2000 in 1980s dollars. Now add a printer: a low-end dot matrix for $300, or a laser for $1500 or even more. A modem: another $300. So the basic system would be $3600, or $9900 in 2022 dollars. If you wanted a nice printer, you're now pushing well over $10,000 in 2022 dollars. You start to see one barrier here, and also why things like shareware and piracy (if it was indeed even recognized as such) were common in those days. So you can see, going from a home computer setup (TRS-80, Commodore C64, Apple ][, etc) to a business-class PC setup was an order of magnitude increase in cost. From there to the high-end minis/mainframes was another order of magnitude (at least!) increase. Eventually there was price pressure on the higher end and things all got better, which is probably why the non-DOS PCs lasted until the early 90s.
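The inflation adjustments above all come down to a single multiplier. Using the roughly 2.75x factor implied by the article's own $495 to $1360 conversion (an approximation of price growth from 1985 to 2022, not an official CPI figure), the arithmetic works out as follows:

    # Rough 1985 -> 2022 price conversion using the ~2.75x factor implied by
    # the $495 -> $1360 WordPerfect example above (approximate, not official CPI).
    FACTOR_1985_TO_2022 = 1360 / 495   # about 2.75

    for label, price_1985 in [
        ("WordPerfect (list)", 495),
        ("PC clone", 2000),
        ("dot-matrix printer", 300),
        ("modem", 300),
        ("basic system total", 3600),
    ]:
        print(f"{label}: ${price_1985} in 1985 -> "
              f"~${price_1985 * FACTOR_1985_TO_2022:,.0f} in 2022")
    # The last line prints roughly $9,900, matching the figure quoted above.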

Increasing Capabilities My first exposure to computers in school was in the 4th grade, when I would have been about 9. There was a single Apple ][ machine in that room. I primarily remember playing Oregon Trail on it. The next year, the school added a computer lab. Remember, this is a small rural area, so each graduating class might have about 25 people in it; this lab was shared by everyone in the K-8 building. It was full of some flavor of IBM PS/2 machines running DOS and Netware. There was a dedicated computer teacher too, though I think she was a regular teacher that was given somewhat minimal training on computers. We were going to learn typing that year, but I did so well on the very first typing program that we soon worked out that I could do programming instead. I started going to school early (these machines were far more powerful than the XT at home) and worked on programming projects there. Eventually my parents bought me a Gateway 486SX/25 with a VGA monitor and hard drive. Wow! This was a whole different world. It may have come with Windows 3.0 or 3.1 on it, but I mainly remember running OS/2 on that machine. More on that below.

Programming That CoCo II came with a BASIC interpreter in ROM. It came with a large manual, which served as a BASIC tutorial as well. The BASIC interpreter was also the shell, so you literally could not use the computer without at least a bit of BASIC. Once I had access to a DOS machine, it also had a BASIC interpreter: GW-BASIC. There was a fair bit of software written in BASIC at the time, but most of the more advanced software wasn't. I wondered how these .EXE and .COM programs were written. I could find vague references to DEBUG.EXE, assemblers, and such. But it wasn't until I got a copy of Turbo Pascal that I was able to do that sort of thing myself. Eventually I got Borland C++ and taught myself C as well. A few years later, I wanted to try writing GUI programs for Windows, and bought Watcom C++ (much cheaper than the competition), which could target Windows, DOS (and I think even OS/2). Notice that, aside from BASIC, none of this was free, and none of it was bundled. You couldn't just download a C compiler, or a Python interpreter, or whatnot back then. You had to pay for the ability to write any kind of serious code on the computer you already owned.

The Microsoft Domination Microsoft came to dominate the PC landscape, and then even the computing landscape as a whole. IBM very quickly lost control over the hardware side of PCs as Compaq and others made clones, but Microsoft has managed, in varying degrees even to this day, to keep a stranglehold on the software, and especially the operating system, side. Yes, there was occasional talk of things like DR-DOS, but by and large the dominant platform came to be the PC, and if you had a PC, you ran DOS (and later Windows) from Microsoft. For a while, it looked like IBM was going to challenge Microsoft on the operating system front; they had OS/2, and when I switched to it sometime around the version 2.1 era in 1993, it was unquestionably more advanced technically than the consumer-grade Windows from Microsoft at the time. It had Internet support baked in, could run most DOS and Windows programs, and had introduced a replacement for the by-then terrible FAT filesystem: HPFS, in 1988. Microsoft wouldn't introduce a better filesystem for its consumer operating systems until Windows XP in 2001, 13 years later. But more on that story later.

Free Software, Shareware, and Commercial Software I've covered the high cost of software already. Obviously $500 software wasn't going to sell in the home market. So what did we have? Mainly, these things:
  1. Public domain software. It was free to use, and if implemented in BASIC, probably had source code with it too.
  2. Shareware
  3. Commercial software (some of it from small publishers was a lot cheaper than $500)
Let's talk about shareware. The idea with shareware was that a company would release a useful program, sometimes limited. You were encouraged to 'register', or pay for, it if you liked it and used it. And, regardless of whether you registered it or not, you were told 'please copy!' Sometimes shareware was fully functional, and registering it got you nothing more than printed manuals and an easy conscience (guilt trips for not registering weren't necessarily very subtle). Sometimes unregistered shareware would have a nag screen: a delay of a few seconds while they told you to register. Sometimes they'd be limited in some way; you'd get more features if you registered. With games, it was popular to have a trilogy, and release the first episode (inevitably ending with a cliffhanger) as shareware, while the subsequent episodes would require registration. In any event, a lot of software people used in the 80s and 90s was shareware. Also pirated commercial software, though in the earlier days of computing, I think some people didn't even know the difference. Notice what's missing: Free Software / FLOSS in the Richard Stallman sense of the word. Stallman lived in the big institution world (after all, he worked at MIT), and what he was doing with the Free Software Foundation and GNU project, beginning in 1983, never really filtered into the DOS/Windows world at the time. I had no awareness of it even existing until into the 90s, when I first started getting some hints of it as a port of gcc became available for OS/2. The Internet was what really brought this home, but I'm getting ahead of myself. I want to say again: FLOSS never really entered the DOS and Windows 3.x ecosystems. You'd see it make a few inroads here and there in later versions of Windows, and more so now that Microsoft has been sort of forced to accept it, but still, reflect on its legacy. What is the software market like in Windows compared to Linux, even today? Now it is, finally, time to talk about connectivity!

Getting On-Line What does it even mean to 'get on line'? Certainly not connecting to a wifi access point. The answer is, unsurprisingly, complex. But for everyone except the large institutional users, it begins with a telephone.

The telephone system By the 80s, there was one communication network that already reached into nearly every home in America: the phone system. Virtually every household (note I don't say every person) was uniquely identified by a 10-digit phone number. You could, at least in theory, call up virtually any other phone in the country and be connected in less than a minute. But I've got to talk about cost. The way things worked in the USA, you paid a monthly fee for a phone line. Included in that monthly fee was unlimited local calling. What is a local call? That was an extremely complex question. Generally it meant, roughly, calling within your city. But of course, as you deal with things like suburbs and cities growing into each other (e.g., the Dallas-Ft. Worth metroplex), things got complicated fast. But let's just say for simplicity you could call others in your city. What about calling people not in your city? That was 'long distance', and you paid (often hugely) by the minute for it. Long distance rates were difficult to figure out, but were generally most expensive during business hours and cheapest at night or on weekends. Prices eventually started to come down when competition was introduced for long distance carriers, but even then you often were stuck with a single carrier for long distance calls outside your city but within your state. Anyhow, let's just leave it at this: local calls were virtually free, and long distance calls were extremely expensive.

Getting a modem I remember getting a modem that ran at either 1200bps or 2400bps. Either way, quite slow; you could often read even plain text faster than the modem could display it. But what was a modem? A modem hooked up to a computer with a serial cable, and to the phone system. By the time I got one, modems could automatically dial and answer. You would send a command like ATDT5551212 and it would dial 555-1212. Modems had speakers, because often things wouldn't work right, and the telephone system was oriented around speech, so you could hear what was happening. You'd hear it wait for dial tone, then dial, then hopefully the remote end would ring, a modem there would answer, you'd hear the screeching of a handshake, and eventually your terminal would say CONNECT 2400. Now your computer was bridged to the other; anything going out your serial port was encoded as sound by your modem and decoded at the other end, and vice-versa. But what, exactly, was the other end? It might have been another person at their computer. Turn on local echo, and you can see what they did. Maybe you'd send files to each other. But in my case, the answer was different: PC Magazine.
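That ATDT command is part of the Hayes AT command set that virtually all of these modems spoke. As a hedged sketch only (the serial device path and phone number are placeholders, and it assumes the third-party pyserial library), driving such a modem from a program might look roughly like this:

    import serial  # third-party pyserial library (pip install pyserial)

    # Placeholder device and number; a real 2400bps modem would sit on a serial port.
    PORT, NUMBER = "/dev/ttyS0", "5551212"

    with serial.Serial(PORT, baudrate=2400, timeout=5) as modem:
        modem.write(b"ATZ\r")                      # reset the modem
        print(modem.readline())                    # expect something like b"OK"
        modem.write(f"ATDT{NUMBER}\r".encode())    # tone-dial the number
        # On success the modem eventually reports something like b"CONNECT 2400";
        # after that, bytes written here travel over the phone line to the far end.
        print(modem.readline())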

PC Magazine and CompuServe Starting around 1986 (so I would have been about 6 years old), I got to read PC Magazine. My dad would bring home copies that were being discarded at his office for me to read, and I think eventually bought me a subscription directly. This was not just a standard magazine; it ran something like 350-400 pages an issue, and came out every other week. This thing was a monster. It had reviews of hardware and software, descriptions of upcoming technologies, pages and pages of ads (that often had some degree of being informative to them). And they had sections on programming. Many issues would talk about BASIC or Pascal programming, and there'd be a utility in most issues. What do I mean by 'a utility in most issues'? Did they include a floppy disk with software? No, of course not. There was a literal program listing printed in the magazine. If you wanted the utility, you had to type it in. And a lot of them were written in assembler, so you had to have an assembler. An assembler, of course, was not free, and I didn't have one. Or maybe they wrote it in Microsoft C, and I had Borland C, and (of course) they weren't compatible. Sometimes they would list the program sort of in binary: line after line of a BASIC program, with lines like 64, 193, 253, 0, 53, 0, 87 that you would type in for hours, hopefully correctly. Running the BASIC program would, if you got it correct, emit a .COM file that you could then run. They did have a rudimentary checksum system built in, but it wasn't even a CRC, so something like swapping two numbers you'd never notice, except when the program would mysteriously hang. Eventually they teamed up with CompuServe to offer a limited slice of CompuServe for the purpose of downloading PC Magazine utilities. This was called PC MagNet. I am foggy on the details, but I believe that for a time you could connect to the limited PC MagNet part of CompuServe for free (after the cost of the long-distance call, that is) rather than paying for CompuServe itself (because, OF COURSE, that also charged you per minute). So in the early days, I would get special permission from my parents to place a long distance call, and after some nerve-wracking minutes in which we were aware every minute was racking up charges, I could navigate the menus, download what I wanted, and log off immediately. I still, incidentally, mourn what PC Magazine became. As with computing generally, it followed the mass market. It lost its deep technical chops, cut its programming columns, stopped talking about things like how SCSI worked, and so forth. By the time it stopped printing in 2009, it was no longer a square-bound 400-page behemoth, but rather looked more like a copy of Newsweek, but with less depth.
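Returning to those type-in listings: the weakness of a plain additive checksum is exactly the one described above. It only sums the values, so swapping two of them leaves the total unchanged. A small Python illustration (the byte values are just the ones quoted from the listings above, used as sample data):

    # A simple additive checksum, roughly in the spirit of the type-in listings.
    def additive_checksum(values):
        return sum(values) % 256

    correct  = [64, 193, 253, 0, 53, 0, 87]
    swapped  = [64, 253, 193, 0, 53, 0, 87]   # two adjacent values transposed
    mistyped = [64, 193, 252, 0, 53, 0, 87]   # one value off by one

    print(additive_checksum(correct) == additive_checksum(swapped))   # True: swap goes undetected
    print(additive_checksum(correct) == additive_checksum(mistyped))  # False: this typo is caught
    # A CRC, by contrast, is position-sensitive, so the transposition would also be flagged.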

Continuing with CompuServe CompuServe was a much larger service than just PC MagNet. Eventually, our family got a subscription. It was still an expensive and scarce resource; I'd call it only after hours when the long-distance rates were cheapest. Everyone had a numerical username separated by commas; mine was 71510,1421. CompuServe had forums, and files. Eventually I would use TapCIS to queue up things I wanted to do offline, to minimize phone usage online. CompuServe eventually added a gateway to the Internet. For the sum of somewhere around $1 a message, you could send or receive an email from someone with an Internet email address! I remember the thrill of one time, as a kid of probably 11 years, sending a message to one of the editors of PC Magazine and getting a kind, if brief, reply back! But inevitably I had ...

The Godzilla Phone Bill Yes, one month I became lax in tracking my time online. I ran up my parents' phone bill. I don't remember how high, but I remember it was hundreds of dollars, a hefty sum at the time. As I watched Jason Scott's BBS Documentary, I realized how common an experience this was. I think this was the end of CompuServe for me for a while.

Toll-Free Numbers I lived near a town with a population of 500. Not even IN town, but near town. The calling area included another town with a population of maybe 1500, so all told, there were maybe 2000 people total I could talk to with a local call - though far fewer numbers, because remember, telephones were allocated by the household. There were, as far as I know, zero modems that were a local call (aside from one that belonged to a friend I met in around 1992). So basically everything was long-distance. But there was a special feature of the telephone network: toll-free numbers. Normally when calling long-distance, you, the caller, paid the bill. But with a toll-free number, beginning with 1-800, the recipient paid the bill. These numbers almost inevitably belonged to corporations that wanted to make it easy for people to call. Sales and ordering lines, for instance. Some of these companies started to set up modems on toll-free numbers. There were few of these, but they existed, so of course I had to try them! One of them was a company called PennyWise that sold office supplies. They had a toll-free line you could call with a modem to order stuff. Yes, online ordering before the web! I loved office supplies. And, because I lived far from a big city, if the local K-Mart didn't have it, I probably couldn't get it. Of course, the interface was entirely text, but you could search for products and place orders with the modem. I had loads of fun exploring the system, and actually ordered things from them - and probably saved money doing so. With the first order they shipped a monster full-color catalog. That thing must have been 500 pages, like the Sears catalogs of the day. Every item had a part number, which streamlined ordering through the modem.

Inbound FAXes By the 90s, a number of modems became able to send and receive FAXes as well. For those that don't know, a FAX machine was essentially a special modem. It would scan a page and digitally transmit it over the phone system, where it would, at least in the early days, be printed out in real time (because the machines didn't have the memory to store an entire page as an image). Eventually, PC modems integrated FAX capabilities. There still wasn't anything useful I could do locally, but there were ways I could get other companies to FAX something to me. I remember two of them. One was for US Robotics. They had an on-demand FAX system. You'd call up a toll-free number, which was an automated IVR system. You could navigate through it and select various documents of interest to you: spec sheets and the like. You'd key in your FAX number, hang up, and US Robotics would call YOU and FAX you the documents you wanted. Yes! I was talking to a computer (of a sort) at no cost to me! The New York Times also ran a service for a while called TimesFax. Every day, they would FAX out a page or two of summaries of the day's top stories. This was pretty cool in an era in which I had no other way to access anything from the New York Times. I managed to sign up for TimesFax (I have no idea how, anymore) and for a while I would get a daily FAX of their top stories. When my family got its first laser printer, I could then even print these FAXes complete with the gothic New York Times masthead. Wow! (OK, so technically I could print it on a dot-matrix printer also, but graphics on a 9-pin dot matrix is a kind of pain that is a whole other article.)

My own phone line Remember how I discussed that phone lines were allocated per household? This was a problem for a lot of reasons:
  1. Anybody that tried to call my family while I was using my modem would get a busy signal (unable to complete the call)
  2. If anybody in the house picked up the phone while I was using it, that would degrade the quality of the ongoing call and either mess up or disconnect the call in progress. In many cases, that could cancel a file transfer (which wasn't necessarily easy or possible to resume), prompting howls of annoyance from me.
  3. Generally we all had to work around each other
So eventually I found various small jobs and used the money I made to pay for my own phone line and my own long-distance costs. Eventually I upgraded to a 28.8Kbps US Robotics Courier modem even! Yes, you heard it right: I got a job and a bank account so I could have a phone line and a faster modem. Uh, isn't that why every teenager gets a job? Now my local friend and I could call each other freely - at least on my end (I can't remember if he had his own phone line too). We could exchange files using HS/Link, which had the added benefit of allowing split-screen chat even while a file transfer was in progress. I'm sure we spent hours chatting keyboard-to-keyboard while sharing files with each other.

Technology in Schools By this point in the story, we're in the late 80s and early 90s. I'm still using PC-style OSs at home; OS/2 in the later years of this period, DOS or maybe a bit of Windows in the earlier years. I mentioned that they let me work on programming at school starting in 5th grade. It was soon apparent that I knew more about computers than anybody on staff, and I started getting pulled out of class to help teachers or administrators with vexing school problems. This continued until I graduated from high school, incidentally often to my enjoyment, and the annoyance of one particular teacher who, I must say, I was fine with annoying in this way. That's not to say that there was institutional support for what I was doing. It was, after all, a small school. Larger schools might have introduced BASIC or maybe Logo in high school. But I had already taught myself BASIC, Pascal, and C by the time I was somewhere around 12 years old. So I wouldn't have had any use for that anyhow. There were programming contests occasionally held in the area. Schools would send teams. My school didn't really send anybody, but I went as an individual. One of them was run by a local college (but for jr. high and high school students). Years later, I met one of the professors who ran it. He remembered me, and that day, better than I did. The programming contest had problems one could solve in BASIC or Logo. I knew nothing about what to expect going into it, but I had lugged my computer and screen along, and asked him, "Can I write my solutions in C?" He was, apparently, stunned, but said, "Sure, go for it." I took first place that day, leading to some rather confused teams from much larger schools. The Netware network that the school had was, as these generally were, itself isolated. There was no link to the Internet or anything like it. Several schools across three local counties eventually invested in a fiber-optic network linking them together. This built a larger, but still closed, network. Its primary purpose was to allow students to be exposed to a wider variety of classes at high schools. Participating schools had an "ITV room", outfitted with cameras and mics. So students at any school could take classes offered over ITV at other schools. For instance, only my school taught German classes, so people at any of those participating schools could take German. It was an early Zoom room. But alongside the TV signal, there was enough bandwidth to run some Netware frames. By about 1995 or so, this let one of the schools purchase some CD-ROM software that was made available on a file server and could be accessed by any participating school. Nice! But Netware was mainly about file and printer sharing; there wasn't even a facility like email, at least not on our deployment.

BBSs My last hop before the Internet was the BBS. A BBS was a computer program, usually run by a hobbyist like me, on a computer with a modem connected. Callers would call it up, and they'd interact with the BBS. Most BBSs had discussion groups like forums and file areas. Some also had games. I, of course, continued to have that most vexing of problems: they were all long-distance. There were some ways to help with that, chiefly QWK and BlueWave. These, somewhat like TapCIS in the CompuServe days, let me download new message posts for reading offline, and queue up my own messages to send later. QWK and BlueWave didn't help with file downloading, though.

BBSs get networked BBSs were an interesting thing. You'd call up one, and inevitably somewhere in the file area would be a BBS list. Download the BBS list and you've suddenly got a list of phone numbers to try calling. All of them were long distance, of course. You'd try calling them at random and have a success rate of maybe 20%. The other 80% would be defunct; you might get the dreaded "this number is no longer in service" or the even more dreaded angry human answering the phone (and of course a modem can't talk to a human, so they'd just get silence for probably the nth time that week). The phone company cared nothing about BBSs and recycled their numbers just as fast as any others. To talk to various people, or participate in certain discussion groups, you'd have to call specific BBSs. That's annoying enough in the general case, but even more so for someone paying long-distance rates for it all, because it took a few minutes to establish a connection to a BBS: handshaking, logging in, menu navigation, etc. But BBSs started talking to each other. The earliest successful such effort was FidoNet, and for the duration of the BBS era, it remained by far the largest. FidoNet was analogous to the UUCP that the institutional users had, but ran on the much cheaper PC hardware. Basically, BBSs that participated in FidoNet would relay email, forum posts, and files between themselves overnight. Eventually, as with UUCP, by hopping through this network, messages could reach around the globe, and forums could have worldwide participation asynchronously, long before they could link to each other directly via the Internet. It was almost entirely volunteer-run.
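To make the hop-by-hop idea concrete, here is a small, purely illustrative sketch in Python. The node addresses and links are hypothetical (real FidoNet used zone:net/node addressing and nightly mail-hour transfers), but the principle is the same: a message crosses the network one overnight exchange at a time.

    from collections import deque

    # Hypothetical links between systems; each hop corresponds to one overnight exchange.
    links = {
        "1:100/1": ["1:100/2"],                 # my BBS talks only to its nearest hub
        "1:100/2": ["1:100/1", "2:200/1"],
        "2:200/1": ["1:100/2", "2:200/5"],
        "2:200/5": ["2:200/1"],
    }

    def route(src, dst):
        """Breadth-first search over the links: the path length is the number of nights."""
        seen, queue = {src}, deque([[src]])
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in links.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])

    print(route("1:100/1", "2:200/5"))   # ['1:100/1', '1:100/2', '2:200/1', '2:200/5']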

Running my own BBS At age 13, I eventually chose to set up my own BBS. It ran on my single phone line, so of course when I was dialing up something else, nobody could dial up me. Not that this was a huge problem; in my town of 500, I probably had a good 1 or 2 regular callers in the beginning. In the PC era, there was a big difference between a server and a client. Server-class software was expensive and rare. Maybe in later years you had an email client, but an email server would be completely unavailable to you as a home user. But with a BBS, I could effectively run a server. I even ran serial lines in our house so that the BBS could be connected from other rooms! Since I was running OS/2, the BBS didn't tie up the computer; I could continue using it for other things. FidoNet had an Internet email gateway. This one, unlike CompuServe's, was free. Once I had a BBS on FidoNet, you could reach me from the Internet using the FidoNet address. This didn't support attachments, but then email of the day didn't really, either. Various others outside Kansas ran FidoNet distribution points. I believe one of them was mgmtsys; my memory is quite vague, but I think they offered a direct gateway and I would call them to pick up Internet mail via FidoNet protocols, but I'm not at all certain of this.

Pros and Cons of the Non-Microsoft World As mentioned, Microsoft was and is the dominant operating system vendor for PCs. But I left that world in 1993, and here, nearly 30 years later, have never really returned. I got an operating system with more technical capabilities than the DOS and Windows of the day, but the tradeoff was a much smaller software ecosystem. OS/2 could run DOS programs, but it ran OS/2 programs a lot better. So if I were to run a BBS, I wanted one that had a native OS/2 version limiting me to a small fraction of available BBS server software. On the other hand, as a fully 32-bit operating system, there started to be OS/2 ports of certain software with a Unix heritage; most notably for me at the time, gcc. At some point, I eventually came across the RMS essays and started to be hooked.

Internet: The Hunt Begins I certainly was aware that the Internet was out there and interesting. But the first problem was: how the heck do I get connected to the Internet?

Computer labs There was one place that tended to have Internet access: colleges and universities. In 7th grade, I participated in a program that resulted in me being invited to visit Duke University, and in 8th grade, I participated in National History Day, resulting in a trip to visit the University of Maryland. I probably sought out computer labs at both of those. My most distinct memory was finding my way into a computer lab at one of those universities, and it was full of NeXT workstations. I had never seen or used NeXT before, and had no idea how to operate it. I had brought a box of floppy disks, unaware that the DOS disks probably weren't compatible with NeXT. Closer to home, a small college had a computer lab that I could also visit. I would go there, with my stack of floppies, in summer or whenever it wasn't otherwise in use. I remember downloading disk images of FLOSS operating systems: FreeBSD, Slackware, or Debian, at the time. The hash marks from the DOS-based FTP client would creep across the screen as the 1.44MB disk images would slowly download. telnet was also available on those machines, so I could telnet to things like public-access Archie servers and libraries - though not Gopher. Still, FTP and telnet access opened up a lot, and I learned quite a bit in those years.

Continuing the Journey At some point, I got a copy of the Whole Internet User's Guide and Catalog, published in 1994. I still have it. If I hadn't already figured it out by then, I certainly became aware from it that Unix was the dominant operating system on the Internet. The examples in Whole Internet covered FTP, telnet, gopher - all assuming the user somehow got to a Unix prompt. The web was introduced about 300 pages in; clearly viewed as something that wasn't page 1 material. And it covered the command-line www client before introducing the graphical Mosaic. Even then, though, the book highlighted Mosaic's utility as a front-end for Gopher and FTP, and even the ability to launch telnet sessions by clicking on links. But having a copy of the book didn't equate to having any way to run Mosaic. The machines in the computer lab I mentioned above all ran DOS and were incapable of running a graphical browser. I had no SLIP or PPP (both ways to run Internet traffic over a modem) connectivity at home. In short, the Web was something for the large institutional users at the time.

CD-ROMs As CD-ROMs came out, with their huge (for the day) 650MB capacity, various companies started collecting software that could be downloaded on the Internet and selling it on CD-ROM. The two most popular ones were Walnut Creek CD-ROM and Infomagic. One could buy extensive Shareware and gaming collections, and then even entire Linux and BSD distributions. Although not exactly an Internet service per se, it was a way of bringing what may ordinarily only be accessible to institutional users into the home computer realm.

Free Software Jumps In As I mentioned, by the mid 90s, I had come across RMS's writings about free software - most probably his 1992 essay Why Software Should Be Free. (Please note, this is not a commentary on the more recently-revealed issues surrounding RMS, but rather his writings and work as I encountered them in the 90s.) The notion of a Free operating system - not just in cost but in openness - was incredibly appealing. Not only could I tinker with it to a much greater extent due to having source for everything, but it included so much software that I'd otherwise have to pay for. Compilers! Interpreters! Editors! Terminal emulators! And, especially, server software of all sorts. There'd be no way I could afford or run Netware, but with a Free Unixy operating system, I could do all that. My interest was obviously piqued. Add to that the fact that I could actually participate and contribute, and I was about to become hooked on something that I've stayed hooked on for decades. But then the question was: which Free operating system? Eventually I chose FreeBSD to begin with; that would have been sometime in 1995. I don't recall the exact reasons for that. I remember downloading Slackware install floppies, and probably the fact that Debian wasn't yet at 1.0 scared me off for a time. FreeBSD's fantastic Handbook - far better than anything I could find for Linux at the time - was no doubt also a factor.

The de Raadt Factor Why not NetBSD or OpenBSD? The short answer is Theo de Raadt. Somewhere in this time, when I was somewhere between 14 and 16 years old, I asked some questions comparing NetBSD to the other two free BSDs. This was on a NetBSD mailing list, but for some reason Theo saw it and got a flame war going, which CC'd me. Now keep in mind that even if NetBSD had a web presence at the time, it would have been minimal, and I would have - not all that unusually for the time - had no way to access it. I was certainly not aware of the, shall we say, acrimony between Theo and NetBSD. While I had certainly seen an online flamewar before, this took on a different and more disturbing tone; months later, Theo randomly emailed me under the subject "SLIME" saying that I was, well, "SLIME". I seem to recall periodic emails from him thereafter reminding me that he hates me and that he had blocked me. (Disclaimer: I have poor email archives from this period, so the full details are lost to me, but I believe I am accurately conveying these events from over 25 years ago.) This was a surprise, and an unpleasant one. I was trying to learn, and while it is possible I didn't understand some aspect or other of netiquette (or Theo's personal hatred of NetBSD) at the time, still that is not a reason to flame a 16-year-old (though he would have had no way to know my age). This didn't leave any kind of scar, but did leave a lasting impression; to this day, I am particularly concerned with how FLOSS projects handle poisonous people. Debian, for instance, has come a long way in this over the years, and even Linus Torvalds has turned over a new leaf. I don't know if Theo has. In any case, I didn't use NetBSD then. I did try it periodically in the years since, but never found it compelling enough to justify a large switch from Debian. I never tried OpenBSD for various reasons, but one of them was that I didn't want to join a community that tolerates behavior such as Theo's from its leader.

Moving to FreeBSD Moving from OS/2 to FreeBSD was final. That is, I didn't have enough hard drive space to keep both. I also didn't have the backup capacity to back up OS/2 completely. My BBS, which ran Virtual BBS (and at some point also AdeptXBBS), was deleted and reincarnated in a different form. My BBS was a member of both FidoNet and VirtualNet; the latter was specific to VBBS, and had to be dropped. I believe I may have also had to drop the FidoNet link for a time. This was the biggest change of computing in my life to that point. The earlier experiences hadn't literally destroyed what came before. OS/2 could still run my DOS programs. Its command shell was quite DOS-like. It ran Windows programs. I was going to throw all that away and leap into the unknown. I wish I had saved a copy of my BBS; I would love to see the messages I exchanged back then, or see its menu screens again. I have little memory of what it looked like. But other than that, I have no regrets. Pursuing Free, Unixy operating systems brought me a lot of enjoyment and a good career. That's not to say it was easy. All the problems of not being in the Microsoft ecosystem were magnified under FreeBSD and Linux. In a day before EDID, monitor timings had to be calculated manually and you risked destroying your monitor if you got them wrong. Word processing and spreadsheet software was pretty much not there for FreeBSD or Linux at the time; I was therefore forced to learn LaTeX and actually appreciated that. Software like PageMaker or CorelDraw was certainly nowhere to be found for those free operating systems either. But I got a ton of new capabilities. I mentioned the BBS didn't shut down, and indeed it didn't. I ran what was surely a supremely unique oddity: a free, dial-in Unix shell server in the middle of a small town in Kansas. I'm sure I provided things such as pine for email and some help text and maybe even printouts for how to use it. The set of callers slowly grew over the time period, in fact. And then I got UUCP.

Enter UUCP Even throughout all this, there was no local Internet provider and things were still long distance. I had Internet email access via assorted routes, but they were all strange. And, I wanted access to Usenet. In 1995, it happened. The local ISP I mentioned offered UUCP access. Though I couldn't afford the dialup shell (or later, SLIP/PPP) that they offered due to long-distance costs, UUCP's very efficient batched processes looked doable. I believe I established that link when I was 15, so in 1995. I worked to register my domain, complete.org, as well. At the time, the process was a bit lengthy and involved downloading a text file form, filling it out in a precise way, sending it to InterNIC, and probably mailing them a check. Well, I did that, and in September of 1995, complete.org became mine. I set up sendmail on my local system, as well as INN to handle the limited Usenet newsfeed I requested from the ISP. I even ran Majordomo to host some mailing lists, including some that were surprisingly high-traffic for a few-times-a-day long-distance modem UUCP link! The modem client programs for FreeBSD were somewhat less advanced than for OS/2, but I believe I wound up using Minicom or Seyon to continue to dial out to BBSs and, I believe, continue to use Learning Link. So all the while I was setting up my local BBS, I continued to have access to the text Internet, consisting of chiefly Gopher for me.
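That batching model is worth a sketch. The toy Python below is purely illustrative (it is not UUCP's actual spool layout or tooling): work is queued locally all day at no cost, then pushed in one batch during a single scheduled call, which is exactly what made UUCP attractive on a long-distance line.

    import json, os, time

    SPOOL = "spool-sketch"   # hypothetical spool directory, not the real UUCP layout

    def enqueue(kind, payload):
        """Queue an outbound job (mail, news article, ...) locally; nothing is sent yet."""
        os.makedirs(SPOOL, exist_ok=True)
        name = os.path.join(SPOOL, f"{time.time_ns()}.{kind}.json")
        with open(name, "w") as f:
            json.dump({"kind": kind, "payload": payload}, f)

    def flush(send):
        """During the scheduled call window, push every queued job in one batch."""
        for name in sorted(os.listdir(SPOOL)):
            path = os.path.join(SPOOL, name)
            with open(path) as f:
                send(json.load(f))
            os.remove(path)

    # Queue things up all day, then pay for a single late-night long-distance call.
    enqueue("mail", {"to": "someone@example.com", "body": "hello from the prairie"})
    enqueue("news", {"group": "comp.misc", "body": "a short post"})
    flush(lambda job: print("sending", job["kind"]))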

Switching to Debian I switched to Debian sometime in 1995 or 1996, and have been using Debian as my primary OS ever since. I continued to offer shell access, but added the WorldVU Atlantis menuing BBS system. This provided a return of a more BBS-like interface (by default; shell was still an option) as well as some BBS door games such as LoRD and TradeWars 2002, running under DOS emulation. I also continued to run INN, and ran ifgate to allow FidoNet echomail to be presented into INN Usenet-like newsgroups, and netmail to be gated to Unix email. This worked pretty well. The BBS continued to grow in these days, peaking at about two dozen total user accounts, and maybe a dozen regular users.

Dial-up access availability I believe it was in 1996 that dial-up PPP access finally became available in my small town. What a thrill! FINALLY! I could now FTP, use Gopher, telnet, and the web all from home. Of course, it was at modem speeds, but still. (Strangely, I have a memory of accessing the Web using WebExplorer from OS/2. I don't know exactly why; it's possible that by this time, I had upgraded to a 486 DX2/66 and was able to reinstall OS/2 on the old 25MHz 486, or maybe something was wrong with the timeline from my memories from 25 years ago above. Or perhaps I made the occasional long-distance call somewhere before I ditched OS/2.) Gopher sites still existed at this point, and I could access them using Netscape Navigator, which likely became my standard Gopher client at that point. I don't recall using the UMN text-mode gopher client locally at that time, though it's certainly possible I did.

The city Starting when I was 15, I took computer science classes at Wichita State University. The first one was a class in the summer of 1995 on C++. I remember being worried about being good enough for it - I was, after all, just after my HS freshman year and had never taken the prerequisite C class. I loved it and got an A! By 1996, I was taking more classes. In 1996 or 1997 I stayed in Wichita during the day due to having more than one class. So, what would I do then but enjoy the computer lab? The CS dept. had two of them: one that had NCD X terminals connected to a pair of SunOS servers, and another one running Windows. I spent most of the time in the Unix lab with the NCDs; I'd use Netscape or pine, write code, enjoy the University's fast Internet connection, and so forth. In 1997 I had graduated high school and that summer I moved to Wichita to attend college. As was so often the case, I shut down the BBS at that time. It would be 5 years until I again dealt with Internet at home in a rural community. By the time I moved to my apartment in Wichita, I had stopped using OS/2 entirely. I have no memory of ever having OS/2 there. Along the way, I had bought a Pentium 166, and then the most expensive piece of computing equipment I have ever owned: a DEC Alpha, which, of course, ran Linux.

ISDN I must have used dialup PPP for a time, but I eventually got a job working for the ISP I had used for UUCP, and then PPP. While there, I got a 128Kbps ISDN line installed in my apartment, and they gave me a discount on the service for it. That was around 3x the speed of a modem, and crucially was always on and gave me a public IP. No longer did I have to use UUCP; now I got to host my own things! By at least 1998, I was running a web server on www.complete.org, and I had an FTP server going as well.

Even Bigger Cities In 1999 I moved to Dallas, and there got my first broadband connection: an ADSL link at, I think, 1.5Mbps! Now that was something! But it had some reliability problems. I eventually put together a server and had it hosted at an acquaintance's place who had SDSL in his apartment. Within a couple of years, I had switched to various kinds of proper hosting for it, but that is a whole other article. In Indianapolis, I got a cable modem for the first time, with faster speeds but prohibitions on running servers on it. Yuck.

Challenges Being non-Microsoft continued to have challenges. Until the advent of Firefox, a web browser was one of the biggest. While Netscape supported Linux on i386, it didn't support Linux on Alpha. I hobbled along with various attempts at emulators, old versions of Mosaic, and so forth. And, until StarOffice was open-sourced as OpenOffice.org, reading Microsoft file formats was also a challenge, though WordPerfect was briefly available for Linux. Over the years, I have become used to the Linux ecosystem. Perhaps I use Gimp instead of Photoshop and digiKam instead of, well, whatever somebody would use on Windows. But I get ZFS, and containers, and so much that isn't available there. Yes, I know Apple never went away and is a thing, but for most of the time period I discuss in this article, at least after the rise of DOS, it was niche compared to the PC market.

Back to Kansas In 2002, I moved back to Kansas, to a rural home near a different small town in the county next to where I grew up. Over there, it was back to dialup at home, but I had faster access at work. I didn't much care for this, and thus began a 20+-year effort to get broadband in the country. At first, I got a wireless link, which worked well enough in the winter, but had serious problems in the summer when the trees leafed out. Eventually DSL became available locally - highly unreliable, but still, it was something. Then I moved back to the community I grew up in, a few miles from where I grew up. Again I got DSL - a bit better. But after some years, being at the end of the run of DSL meant I had poor speeds and reliability problems. I eventually switched to various wireless ISPs, which continues to the present day; while people in cities can get Gbps service, I can get, at best, about 50Mbps. Long-distance fees are gone, but the speed disparity remains.

Concluding Reflections I am glad I grew up where I did; the strong community has a lot of advantages I don't have room to discuss here. In a number of very real senses, having no local services made things a lot more difficult than they otherwise would have been. However, perhaps I could say that I also learned a lot through the need to come up with inventive solutions to those challenges. To this day, I think a lot about computing in remote environments: partially because I live in one, and partially because I enjoy visiting places that are remote enough that they have no Internet, phone, or cell service whatsoever. I have written articles like Tools for Communicating Offline and in Difficult Circumstances based on my own personal experience. I instinctively think about making protocols robust in the face of various kinds of connectivity failures because I experience various kinds of connectivity failures myself.

(Almost) Everything Lives On In 2002, Gopher turned 10 years old. It had probably been about 9 or 10 years since I had first used Gopher, which was the first way I got on live Internet from my house. It was hard to believe. By that point, I had an always-on Internet link at home and at work. I had my Alpha, and probably also at least PCMCIA Ethernet for a laptop (many laptops had modems by the 90s also). Despite its popularity in the early 90s, less than 10 years after it came on the scene and started to unify the Internet, it was mostly forgotten. And it was at that moment that I decided to try to resurrect it. The University of Minnesota finally released it under an Open Source license. I wrote the first new gopher server in years, pygopherd, and introduced gopher to Debian. Gopher lives on; there are now quite a few Gopher clients and servers out there, newly started post-2002. The Gemini protocol can be thought of as something akin to Gopher 2.0, and it too has a small but blossoming ecosystem. Archie, the old FTP search tool, is dead though. Same for WAIS and a number of the other pre-web search tools. But still, even FTP lives on today. And BBSs? Well, they didn't go away either. Jason Scott's fabulous BBS documentary looks back at the history of the BBS, while Back to the BBS from last year talks about the modern BBS scene. FidoNet somehow is still alive and kicking. UUCP still has its place and has inspired a whole string of successors. Some, like NNCP, are clearly direct descendants of UUCP. Filespooler lives in that ecosystem, and you can even see UUCP concepts in projects as far afield as Syncthing and Meshtastic. Usenet still exists, and you can now run Usenet over NNCP just as I ran Usenet over UUCP back in the day (which you can still do as well). Telnet, of course, has been largely supplanted by ssh, but the concept is more popular now than ever, as Linux has made ssh available on everything from Raspberry Pi to Android. And I still run a Gopher server, looking pretty much like it did in 2002. This post also has a permanent home on my website, where it may be periodically updated.
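Part of Gopher's charm is how small the protocol is. The sketch below is a minimal client in Python following RFC 1436: send a selector terminated by CRLF, read until the server closes the connection, and parse the tab-separated menu lines. The public server used here (gopher.floodgap.com) is just an example target, not anything from the post above.

    import socket

    def gopher_fetch(host, selector="", port=70, timeout=10):
        """Minimal Gopher request per RFC 1436: send the selector plus CRLF, read until EOF."""
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(selector.encode("utf-8") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    # Menu lines are tab-separated: item type + display string, selector, host, port.
    for line in gopher_fetch("gopher.floodgap.com").decode("utf-8", "replace").splitlines():
        if line == ".":                           # a lone dot terminates the listing
            break
        fields = line.split("\t")
        if len(fields) >= 4 and fields[0]:
            print(fields[0][0], fields[0][1:])    # item type character, then display text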

8 August 2022

Adnan Hodzic: atuf.app: Amsterdam Toilet & Urinal Finder

Amsterdam is a great city to enjoy a beer, or two. However, after you've had your beers and you're strolling down the streets, you might... The post atuf.app: Amsterdam Toilet & Urinal Finder appeared first on FoolControl: Phear the penguin.

29 June 2022

Aigars Mahinovs: Long travel in an electric car

Since the first week of April 2022 I have (finally!) changed my company car from a plug-in hybrid to a fully electric car. My new ride, for the next two years, is a BMW i4 M50 in Aventurine Red metallic. An elegant car with a very deep and memorable color, insanely powerful (544 hp/795 Nm), sub-4 second 0-100 km/h, large 84 kWh battery (80 kWh usable), charging up to 210 kW, top speed of 225 km/h and also very efficient (which came out best in this trip) with WLTP range of 510 km and EVDB real range of 435 km. The car also has performance tyres (Hankook Ventus S1 evo3 245/45R18 100Y XL in front and 255/45R18 103Y XL in rear, all at the recommended 2.5 bar) that have reduced efficiency. So I wanted to document and describe how it was for me to travel ~2000 km (one way) with this, electric, car from the south of Germany to the north of Latvia. I have done this trip many times before since I live in Germany now and travel back to my relatives in Latvia 1-2 times per year. This was the first time I made this trip in an electric car. And as this trip includes both travelling in Germany (where BEV infrastructure is best in the world) and across Eastern/Northern Europe, I believe that this can be interesting to a few people out there. Normally when I travelled this trip with a gasoline/diesel car I would drive for two days with an intermediate stop somewhere around Warsaw, with about 12 hours of travel time in each day. This would normally include a couple bathroom stops in each day, at least one longer lunch stop and 3-4 refueling stops on top of that. Normally this would use at least 6 liters of fuel per 100 km on average with total usage of about 270 liters for the whole trip (or about 540 € just in fuel costs, nowadays). My (personal) quirk is that both fuel and recharging of my (business) car inside Germany are actually paid by my employer, so it is useful for me to charge up (or fill up) at the last station in Germany before driving on. The plan for this trip was made in a similar way as when travelling with a gasoline car: travelling as fast as possible on the German Autobahn network to the last charging stop on the A4 near Görlitz, there charging up as much as reasonable and then travelling to a hotel in Warsaw, charging there overnight and travelling north towards Ionity chargers in Lithuania, from where reaching the final target in the north of Latvia should be possible. How did this plan meet the reality? Travelling inside Germany with an electric car was basically perfect. The most efficient way would involve driving fast and hard with a top speed of even 180 km/h (where possible due to speed limits and traffic). The BMW i4 is very efficient at high speeds, with consumption maxing out at 28 kWh/100km when you actually drive at this speed all the time. In real-world conditions on this trip we saw consumption of 20.8-22.2 kWh/100km in the first legs of the trip. The more traffic there is, the more speed limits and roadworks, the lower is the average speed and also the lower the consumption. With this kind of consumption we could comfortably drive 2 hours as fast as we could and then pick any fast charger along the route, and in 26 minutes at a charger (50 kWh charged total) we'd be ready to drive for another 2 hours. This lines up very well with recommended rest stops for biological reasons (bathroom, water or coffee, a bit of movement to get blood circulating) and very close to what I had to do anyway with a gasoline car. With a gasoline car I had to refuel first, then park, then go to the bathroom and so on.
With an electric car I can do all of that while the car is charging, and in the end the total time for a stop is very similar. Also note that there was a crazy heat wave going on and the temperature outside was at about 34C minimum the whole day, hitting 40C at one point of the trip, so a lot of power was used for cooling. The car has a heat pump standard, but it still was working hard to keep us cool in the sun. The car was able to plan a charging route with all the charging stops required and had all the good options (like multiple intermediate stops) that many other cars (hi Tesla) and mobile apps (hi Google and Apple) do not have yet. There are a couple of bugs with the charging route and the display of current route guidance; those are already fixed and will be delivered with the July 2022 over-the-air update. Another good alternative is ABRP (A Better Route Planner), which was specifically designed for electric car routing along the best route for charging. Most phone apps (like Google Maps) have no idea about your specific electric car - it has no idea about the battery capacity, charging curve and is missing key live data as well - what is the current consumption and remaining energy in the battery. ABRP is different - it has data and profiles for almost all electric cars and can also be linked to live vehicle data, either via an OBD dongle or via a new Tronity cloud service. Tronity reads data from a vehicle-specific cloud service, such as the MyBMW service, saves it, tracks history and also re-transmits it to ABRP for live navigation planning. ABRP allows for options and settings that no car or app offers, for example, saying that you want to stop at a particular place for an hour or until the battery is charged to 90%, or saying that you have specific charging cards and would only want to stop at chargers that support those. Both the car and ABRP also support alternate routes, even with multiple intermediate stops. In comparison, route planning by Google Maps or Apple Maps or Waze or even Tesla does not really come close. After charging up at the last German fast charger, a more interesting part of the trip started. In Poland the density of high performance chargers (HPC) is much lower than in Germany. There are many chargers (west of Warsaw), but the vast majority of them are (relatively) slow 50kW chargers. And that is the difference between putting 50kWh into the car in 23-26 minutes or in 60 minutes. It does not seem too much, but the key bit here is that for the first 20 minutes it is easy to find stuff that should be done anyway, but after that you are done and you are just waiting for the car, and whether that takes 4 more minutes or 40 more minutes is a big, perceptual, difference. So using HPC is much, much preferable. So we put in the Ionity charger near Lodz as our intermediate target and the car suggested an intermediate stop at a Greenway charger by Katy Wroclawskie. The location is a bit weird - it has 4 charging stations with 150 kW each. The weird bits are that each station has two CCS connectors, but only one parking place (and the connectors share power, so if two cars were to connect, each would get half power). Also from the front of the location one can only see two stations; the other two are semi-hidden around a corner. We actually missed them on the way to Latvia and one person actually waited for the charger behind us for about 10 minutes. We only discovered the other two stations on the way back.
With slower speeds in Poland the consumption goes down to 18 kWh/100km, which translates to now up to 3 hours of driving between stops. At the end of the first day we drove, starting from Ulm at 9:30 in the morning until about 23:00 in the evening, a total distance of about 1100 km with 5 charging stops, starting with 92% battery, charging for 26 min (50 kWh), 33 min (57 kWh + lunch), 17 min (23 kWh), 12 min (17 kWh) and 13 min (37 kWh). In the last two chargers you can see the difference between a good and fast 150 kW charger at high battery charge level and a really fast Ionity charger at low battery charge level, which makes charging faster still. We arrived at the hotel with 23% of battery. Overnight the car charged from a Porsche Destination Charger to 87% (57 kWh). That was a bit less than I would expect from a full power 11kW charger, but good enough. Hotels should really install 11kW Type2 chargers for their guests, it is a really significant bonus that drives more clients to you. The road between Warsaw and Kaunas is the most difficult part of the trip for both driving itself and also for charging. For driving the problem is that there will be a new highway going from Warsaw to the Lithuanian border, but it is actually not fully ready yet. So parts of the way one drives on the new, great and wide highway and parts of the way one drives on temporary roads or on old single lane undivided roads. And the most annoying part is navigating between parts, as signs are not always clear and the maps are either too old or too new. Some maps do not have the new roads and others have roads that have not actually been built or opened to traffic yet. It's really easy to lose one's way and take a significant detour. As far as charging goes, basically there are only slow 50 kW chargers between Warsaw and Kaunas (for now). We chose to charge at the last charger in Poland, by the Suwalki Kaufland. That was not a good idea - there is only one 50 kW CCS and many people decide the same, so there can be a wait. We had to wait 17 minutes before we could charge for 30 more minutes just to get 18 kWh into the battery. Not the best use of time. On the way back we chose a different charger in Lomza where we would have a relaxed dinner while the car was charging. That was far more relaxing and a better use of time. We also tried charging at an Orlen charger that was not recommended by our car and we found out why. Unlike all other chargers during our entire trip, this charger did not accept our universal BMW Charging RFID card. Instead it demanded that we download their own Orlen app and register there. The app is only available in some countries (and not in others) and on iPhone it is only available in Polish. That is a bad exception to the rule and a bad example. This is also how most charging works in the USA. Here in Europe that is not normal. The normal is to use a charging card - either provided by the car maker or by another supplier (like PlugSurfing or Maingau Energy). The providers then make roaming arrangements with all the charging networks, so the cards just work everywhere. In the end the user gets the prices and the bills from their card provider as a single monthly bill. This also saves the user any credit card charges. Having a clear, separate RFID card also means that one can easily choose how to pay for each charging session.
For example, I have a corporate RFID card that my company pays for (for charging in Germany) and a private BMW Charging card that I am paying for myself (for charging abroad). Having the car itself authenticate directly with the charger (like Tesla does) removes the option to choose how to pay. Having each charging network require its own app or token brings too much chaos and takes too much setup. The optimum is having one card that works everywhere and having the option to have an additional card or cards for specific purposes. Reaching the Ionity chargers in Lithuania is again a breath of fresh air - 20-24 minutes to charge 50 kWh is as expected. One can charge on the first Ionity just enough to reach the next one, and then on the second charger one can charge up enough to either reach the Ionity charger in Adazi or the final target in Latvia. There is a huge number of CSDD (Road Traffic and Safety Directorate) managed chargers all over Latvia, but they are 50 kW chargers. Good enough for local travel, but not great for long distance trips. The BMW i4 charges at over 50 kW on an HPC even at over 90% battery state of charge (SoC). This means that it is always faster to charge up at an HPC than at a 50 kW charger, if that is at all possible. We also tested the CSDD chargers - they worked without any issues. One could pay with the BMW Charging RFID card, one could use the CSDD e-mobi app or token and one could also use Mobilly - an app that you can use in Latvia for everything from parking to public transport tickets or museums or car washes. We managed to reach our final destination near Aluksne with 17% range remaining after just 3 charging stops: 17+30 min (18 kWh), 24 min (48 kWh), 28 min (36 kWh). At the last stop we charged to 90%, which took a few more minutes than would have been optimal. For travel around Latvia we were charging at our target farmhouse from a normal 3 kW Schuko EU socket. That is very slow. We charged for 33 hours and went from 17% to 94%, so not really full. That was perfectly fine for our purposes. We easily reached Riga, drove to the sea and then back to Aluksne with 8% still in reserve and started charging again for the next trip. If it were required to drive around more and charge faster, we could have used the normal 3-phase 440V connection in the farmhouse to have a red CEE 16A plug installed (same as people use for welders). The BMW i4 comes standard with a new BMW Flexible Fast Charger that has changeable socket adapters. It comes by default with a Schuko connector in Europe, but for 90 € one can buy an adapter for the blue CEE plug (3.7 kW) or the red CEE 16A or 32A plugs (11 kW). Some public charging stations in France actually use the blue CEE plugs instead of the more common Type2 electric car charging stations. The CEE plugs are also common in camping parking places. On the way back the long distance BEV travel was already well understood and did not cause us any problems. From our destination we could easily reach the first Ionity in Lithuania, on the Panevezhis bypass road, where in just 8 minutes we got 19 kWh and were ready to drive on to Kaunas; there, a longer 32 minute stop before the charging desert of the Suwalki Gap gave us 52 kWh to 90%. That brought us to a shopping mall in Lomzha where we had some food and charged up 39 kWh in a lazy 50 minutes.
That was enough to bring us to our return hotel for the night - Hotel 500W in Strykow by Lodz, which has a 50kW charger on site. While we were having a late dinner and preparing for sleep, the car easily recharged to full (71 kWh in 95 minutes), so I just moved it from the charger to a parking spot just before going to sleep. A really easy and well flowing day. The second day back went even better, as we just needed an 18 minute stop at the same Katy Wroclawskie charger as before to get 22 kWh and that was enough to get back to Germany. After that we were again flying on the Autobahn and charging as needed, 15 min (31 kWh), 23 min (48 kWh) and 31 min (54 kWh + food). We started the day at about 9:40 and were home at 21:40 after driving just over 1000 km on that day. So less than 12 hours for 1000 km travelled, including all charging, bio stops, food and some traffic jams as well. Not bad. Now let's take a look at all the apps and data connections that a technically minded customer can have for their car. Architecturally the car is a network of computers by itself, but it is very secured and normally people do not have any direct access. However, once you log into the car with your BMW account, the car gets your profile info and preferences (seat settings, navigation favorites, ...) and the car then also can start sending information to the BMW backend about its status. This information is then available to the user over multiple different channels. There is no separate channel for each of those data flows. The data only goes once to the backend and then all other communication of apps happens with the backend. First of all, the MyBMW app. This is the go-to for everything about the car - seeing its current status and location (when not driving), sending commands to the car (lock, unlock, flash lights, pre-condition, ...) and also monitoring and controlling charging processes. You can also plan a route or destination in the app in advance and then just send it over to the car so it already knows where to drive to when you get to the car. This can also integrate with calendar entries, if you have locations for appointments, for example. This also shows the full charging history and allows a very easy export of that data; here I exported all charging sessions from June and then trimmed it back to only sessions relevant to the trip and cut off some design elements to have the data more visible. So one can very easily see when and where we were charging, how much power we got at each spot and (if you set prices for locations) it can even show costs. I've already mentioned the Tronity service and its ABRP integration, but it also saves the information that it gets from the car and gathers that data over time. It has nice aspects, like showing the driven routes on a map, having ways to do business trip accounting and having a good calendar view. Sadly it does not correctly capture the data for charging sessions (the amounts are incorrect). Update: after talking to Tronity support, it looks like the bug was in the incorrect value for the usable battery capacity for my car. They will look into getting the right values there by default, but as a workaround one can edit their car in their system (after at least one charging session) and directly set the expected battery capacity (usable) in the car properties on the Tronity web portal settings. One other fun way to see data from your BMW is using the BMW integration in Home Assistant. This brings the car in as a device in your own smart home.
You can read all the variables from the car's current status (and Home Assistant makes cute historical charts) and you can even see interesting trends; for example, the remaining range shows a much higher value in Latvia, as its prediction is adapted to Latvian road speeds, and during the trip it adapts to Polish and then to German road speeds and thus to higher consumption and thus a lower maximum predicted remaining range. Having the car attached to Home Assistant also allows you to attach the car to automations, both as a data and event source (like detecting when the car enters the "Home" zone) and also as a target, so you could flash the car lights or even unlock or lock it when certain conditions are met. So, what in the end was the most important thing - the cost of the trip? In total we charged up 863 kWh, so that would normally cost one about 290 €, which is close to half of what this trip would have cost with a gasoline car. Out of that, 279 kWh were charged in Germany (paid by my employer) and 154 kWh in the farmhouse (paid by our wonderful relatives :D), so in the end the charging that I actually needed to pay for adds up to 430 kWh or about 150 €. Typically, it took about 400 € in fuel that I had to pay to get to Latvia and back. The difference is really nice! (A quick arithmetic check of these figures follows after the list below.) In the end I believe that there are three different ways of charging:
  • incidental charging - this is the vast majority of charging in normal day-to-day life. The car gets charged when and where it is convenient to do so along the way. If we go to a movie or a shop and there is a chance to leave the car at a charger, then it can charge up. Works really well, does not take extra time for charging from us.
  • fast charging - charging up at a HPC during optimal charging conditions - from relatively low level to no more than 70-80% while you are still doing all the normal things one would do in a quick stop in a long travel process: bio things, cleaning the windscreen, getting a coffee or a snack.
  • necessary charging - charging from whatever charger is available just enough to be able to reach the next destination or the next fast charger.
The last category is the only one that is really annoying and should be avoided at all costs, even by shifting your plans so that you find something else useful to do while necessary charging is happening and thus, at least partially, shifting it over to the incidental charging category. Then you are no longer just waiting for the car; you are doing something else and the car magically is charged up again. And when one does that, then travelling with an electric car becomes no more annoying than travelling with a gasoline car. Having more breaks in a trip is a good thing and makes the trips actually easier and less stressful - I was more relaxed during and after this trip than during previous trips. Having the car air conditioning always be on, even when stopped, was a godsend in the insane heat wave of 30C-38C that we were driving through. Final stats: 4425 km driven in the trip. Average consumption: 18.7 kWh/100km. Time driving: 2 days and 3 hours. Car regened 152 kWh. Charging stations recharged 863 kWh. Questions? You can use this i4talk forum thread or this Twitter thread to ask me.
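As promised above, here is a quick sanity check of the quoted numbers in Python. It is only a rough estimate: it assumes the stated 80 kWh usable capacity and ignores charging losses and regeneration, but the figures hang together nicely.

    usable_kwh = 80                          # stated usable capacity of the 84 kWh pack
    day1_charged = [50, 57, 23, 17, 37]      # kWh added at the five day-one charging stops
    day1_km = 1100
    day1_used = usable_kwh * 0.92 + sum(day1_charged) - usable_kwh * 0.23
    print(round(day1_used / day1_km * 100, 1))    # ~21.7 kWh/100km, within the quoted 20.8-22.2

    total_kwh, est_cost_eur = 863, 290
    employer_kwh, farmhouse_kwh = 279, 154        # paid by the employer / by the relatives
    own_kwh = total_kwh - employer_kwh - farmhouse_kwh
    print(own_kwh)                                       # 430 kWh paid out of pocket
    print(round(own_kwh * est_cost_eur / total_kwh))     # ~144 EUR, i.e. the "about 150" figure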

8 April 2022

Reproducible Builds: Reproducible Builds in March 2022

Welcome to the March 2022 report from the Reproducible Builds project! In our monthly reports we outline the most important things that we have been up to over the past month.
The in-toto project was accepted as an incubating project within the Cloud Native Computing Foundation (CNCF). in-toto is a framework that protects the software supply chain by collecting and verifying relevant data. It does so by enabling libraries to collect information about software supply chain actions and then allowing software users and/or project managers to publish policies about software supply chain practices that can be verified before deploying or installing software. The CNCF hosts a number of critical components of the global technology infrastructure under the auspices of the Linux Foundation. (View full announcement.)
Hervé Boutemy posted to our mailing list with an announcement that the Java Reproducible Central has hit the milestone of "500 fully reproduced builds of upstream projects". Indeed, at the time of writing, according to the nightly rebuild results, 530 releases were found to be fully reproducible, with 100% reproducible artifacts.
GitBOM is a relatively new project that enables build tools to trace every source file that is incorporated into build artifacts. As an experiment and/or proof-of-concept, the GitBOM developers are rebuilding Debian to generate side-channel build metadata for versions of Debian that have already been released. This only works because Debian is (partially) reproducible, so one can be sure that, in the case where build artifacts are identical, any metadata generated during these instrumented builds applies to the binaries that were built and released in the past. More information on their approach is available in the README file in the bomsh repository.
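The reasoning can be stated in a few lines of code. The sketch below is only an illustration of the argument, not anything from the bomsh tooling itself; the file paths and function names are hypothetical.

    import hashlib

    def sha256(path):
        """Content hash of a build artifact."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def metadata_applies(released_artifact, rebuilt_artifact, rebuilt_metadata):
        """If an instrumented rebuild is bit-for-bit identical to the artifact shipped in
        the past, then the side-channel metadata gathered during that rebuild also
        describes the shipped artifact; otherwise it proves nothing about it."""
        if sha256(released_artifact) == sha256(rebuilt_artifact):
            return rebuilt_metadata     # safe to attach retroactively
        return None                     # not reproducible here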
Ludovic Courtès has published an academic paper discussing how the performance requirements of high-performance computing are not (as usually assumed) at odds with reproducible builds. The received wisdom is that vendor-specific libraries and platform-specific CPU extensions have resulted in a culture of local recompilation to ensure the best performance, rendering the property of reproducibility unobtainable or even meaningless. In his paper, Ludovic explains how Guix has:
[…] implemented what we call "package multi-versioning" for C/C++ software that lacks function multi-versioning and run-time dispatch […]. It is another way to ensure that users do not have to trade reproducibility for performance. (full PDF)

Kit Martin posted to the FOSSA blog a post titled The Three Pillars of Reproducible Builds. Inspired by the shock of "infiltrated or intentionally broken NPM packages, supply chain attacks, long-unnoticed backdoors", the post goes on to outline the high-level steps that lead to a reproducible build:
It is one thing to talk about reproducible builds and how they strengthen software supply chain security, but it's quite another to effectively configure a reproducible build. Concrete steps for specific languages are a far larger topic than can be covered in a single blog post, but today we'll be talking about some guiding principles when designing reproducible builds. […]
The article was discussed on Hacker News.
Finally, Bernhard M. Wiedemann noticed that the GNU Helloworld project varies depending on whether it is being built during a full moon! (Reddit announcement, openSUSE bug report)

Events There will be an in-person Debian Reunion in Hamburg, Germany later this year, taking place from 23-30 May. Although this is a Debian event, there will be some folks from the broader Reproducible Builds community and, of course, everyone is welcome. Please see the event page on the Debian wiki for more information. Bernhard M. Wiedemann posted to our mailing list about a meetup for Reproducible Builds folks at the openSUSE conference in Nuremberg, Germany. It was also recently announced that DebConf22 will take place this year as an in-person conference in Prizren, Kosovo. The pre-conference meeting (or "DebCamp") will take place from 10-16 July, and the main talks, workshops, etc. will take place from 17-24 July.

Misc news Holger Levsen updated the Reproducible Builds website to improve the documentation for the SOURCE_DATE_EPOCH environment variable, both by expanding parts of the existing text [ ][ ] and by clarifying the meaning by removing text in other places [ ]. In addition, Chris Lamb added a Twitter Card to our website's metadata too [ ][ ][ ]. On our mailing list this month:

Distribution work In Debian this month:
  • Johannes Schauer Marin Rodrigues posted to the debian-devel list mentioning that he exploited the property of reproducibility within Debian to demonstrate that automatically converting a large number of packages to a new internal source version did not change the resulting packages. The proposed change could therefore be applied without causing breakage:
So now we have 364 source packages for which we have a patch and for which we can show that this patch does not change the build output. Do you agree that with those two properties, the advantages of the 3.0 (quilt) format are sufficient such that the change shall be implemented at least for those 364? [ ]
In openSUSE, Bernhard M. Wiedemann posted his usual monthly reproducible builds status report.

Tooling diffoscope is our in-depth and content-aware diff utility. Not only can it locate and diagnose reproducibility issues, it can also provide human-readable diffs from many kinds of binary formats. This month, Chris Lamb prepared and uploaded versions 207, 208 and 209 to Debian unstable, and made the following changes to the code itself:
  • Updated the minimum version of Black to prevent a test failure on Ubuntu jammy. [ ]
  • Updated the R test fixture for the 4.2.x series of the R programming language. [ ]
Brent Spillner also worked on adding graceful handling for UNIX sockets and named pipes to diffoscope. [ ][ ][ ]. Vagrant Cascadian also updated the diffoscope package in GNU Guix. [ ][ ] reprotest is the Reproducible Builds project's end-user tool to build the same source code twice in widely different environments and check whether the binaries produced by the builds have any differences. This month, Santiago Ruano Rincón added a new --append-build-command option [ ], which was subsequently uploaded to Debian unstable by Holger Levsen.

Upstream patches The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Testing framework The Reproducible Builds project runs a significant testing framework at tests.reproducible-builds.org, to check packages and other artifacts for reproducibility. This month, the following changes were made:
  • Holger Levsen:
    • Replace a local copy of the dsa-check-running-kernel script with a packaged version. [ ]
    • Don t hide the status of offline hosts in the Jenkins shell monitor. [ ]
    • Detect undefined service problems in the node health check. [ ]
    • Update the sources.list file for our mail server, as it's still running Debian buster. [ ]
    • Add our mail server to our node inventory so it is included in the Jenkins maintenance processes. [ ]
    • Remove the debsecan package everywhere; it got installed accidentally via the Recommends relation. [ ]
    • Document the usage of the osuosl174 host. [ ]
Regular node maintenance was also performed by Holger Levsen [ ], Vagrant Cascadian [ ][ ][ ] and Mattia Rizzolo.
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

7 April 2022

Gunnar Wolf: How is the free firmware for the Raspberry progressing?

Raspberry Pi computers require a piece of non-free software to boot: the infamous raspi-firmware package. But for almost as long as there has been a Raspberry Pi to talk of (this year it turns 10 years old!), there have been efforts to get it to boot using only free software. How is it progressing? Michael Bishop (IRC user clever) explained today in the #debian-raspberrypi channel on OFTC that it advances far better than what I expected: it is even possible to boot a usable system under the RPi2 family! There is somewhat incomplete hardware support, though: for his testing, he has managed to use an Xfce environment, but over the composite (NTSC) video output, as HDMI initialization support is not there. However, he shared with me several interesting links and videos, and I told him I'd share them. There are still many issues; I do not believe it is currently worth it to make Debian images with this firmware. Before anything else: go visit the librerpi/lk-overlay repository. Its README outlines hardware support for each of the RPi families; there is a binary build available with NixOS if you want to try it out, and instructions to build it. But what clever showed me that made me write this post is the amount of stuff you can do with the RPi's VPU (why VPU, Vision/Vector Processing Unit, and not the more familiar GPU, Graphical Processing Unit? I don't really know, but I trust clever's definitions beyond how I trust my own) before it loads an operating system. There's not too much I can add to this. I was just truly amazed. And I hope to see the remaining hurdles for regular Linux booting on this range of machines with purely free software quickly go away! Packaging this for Debian? Well, not yet; not so fast. I first told clever we could push this firmware to experimental instead of unstable, as it is not yet ready for most production systems. However, pabs raised some spot-on further questions. And yes, it requires installing three(!) different cross-compilers, one of which (vc4-toolchain, for the VPU) is free software, but not yet upstreamed, and hence not available for Debian. Anyway, the talk continued long after I had to go. I have gone a bit over the backlog, but I have to leave now, so that will be it for this blog post.

1 February 2022

Paul Wise: FLOSS Activities January 2022

Focus This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration
  • Debian BTS: unarchive/reopen/triage bugs for reintroduced packages
  • Debian servers: ping folks about mail forwarding issues
  • Debian wiki: unblock IP addresses, approve accounts

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors The oci-cli, oci-python-sdk, circuitbreaker, autoconf-archive, libpst, purple-discord, sptag work was sponsored. All other work was done on a volunteer basis.

16 January 2022

Chris Lamb: Favourite films of 2021

In my four most recent posts, I went over the memoirs and biographies, the non-fiction, the fiction and the 'classic' novels that I enjoyed reading the most in 2021. But in the very last of my 2021 roundup posts, I'll be going over some of my favourite movies. (Saying that, these are perhaps less my 'favourite films' than the ones worth remarking on; after all, nobody needs to hear that The Godfather is a good movie.) It's probably helpful to remark that I took a self-directed course in film history in 2021, based around the first volume of Roger Ebert's The Great Movies. This collection of 100-odd movie essays aims to make a tour of the landmarks of the first century of cinema, and I watched all but a handful before the year was out. I am slowly making my way through volume two in 2022. This tome was tremendously useful, and not simply due to the background context that Ebert added to each film: it also brought me into contact with films I would hardly have come across by other means. Would I have ever discovered the sly comedy of Trouble in Paradise (1932) or the touching proto-realism of L'Atalante (1934) any other way? It also helped me to 'get around' to watching films I may have put off watching forever: the influential Battleship Potemkin (1925), for instance, and the ur-epic Lawrence of Arabia (1962) spring to mind here. Choosing a 'worst' film is perhaps more difficult than choosing the best. There are first those that left me completely dry (Ready or Not, Written on the Wind, etc.), and those that were simply poorly executed. And there are those that failed to meet their own high opinions of themselves, such as the 'made for Reddit' Tenet (2020) or the inscrutable Vanilla Sky (2001), the latter being an almost perfect example of late-20th century cultural exhaustion. But I must save my most severe judgement for those films where I took a visceral dislike to how their subjects were portrayed. The sexually problematic Sixteen Candles (1984) and the pseudo-Catholic vigilantism of The Boondock Saints (1999) both spring to mind here, the latter of which combines so many things I dislike into such a short running time that I'd need an entire essay to adequately express how much I disliked it.

Dogtooth (2009) A father, a mother, a brother and two sisters live in a large and affluent house behind a very high wall and an always-locked gate. Only the father ever leaves the property, driving to the factory that he happens to own. Dogtooth goes far beyond any allusion to Josef Fritzl's cellar, though, as the children's education is a grotesque parody of home-schooling. Here, the parents deliberately teach their children the wrong meaning of words (e.g. a yellow flower is called a 'zombie'), all of which renders the outside world utterly meaningless and unreadable, and completely mystifies its very existence. It is this creepy strangeness within a 'regular' family unit in Dogtooth that is both socially and epistemically horrific, and I'll say nothing here of its sexual elements as well. Despite its cold, inscrutable and deadpan surreality, Dogtooth invites all manner of potential interpretations. Is this film about the artificiality of the nuclear family that the West insists is the benchmark of normality? Or is it, as I prefer to believe, something more visceral altogether: an allegory for the various forms of ontological violence wrought by fascism, as well as a sobering nod towards some of fascism's inherent appeals? (Perhaps it is both. In 1972, French poststructuralists Gilles Deleuze and Félix Guattari wrote Anti-Oedipus, which plays with the idea of the family unit as a metaphor for the authoritarian state.) The Greek-language Dogtooth, elegantly shot, thankfully provides no easy answers.

Holy Motors (2012) There is an infamous scene in Un Chien Andalou, the 1929 film collaboration between Luis Buñuel and famed artist Salvador Dalí. A young woman is cornered in her own apartment by a threatening man, and she reaches for a tennis racquet in self-defence. But the man suddenly picks up two nearby ropes and drags into the frame two large grand pianos... each laden with a dead donkey, a stone tablet, a pumpkin and a bewildered priest. This bizarre sketch serves as a better introduction to Leos Carax's Holy Motors than any elementary outline of its plot, which ostensibly follows 24 hours in the life of a man who must play a number of extremely diverse roles around Paris... all for no apparent reason. (And is he even a man?) Surrealism as an art movement gets a pretty bad rap these days, and perhaps justifiably so. But Holy Motors and Un Chien Andalou serve as a good reminder that surrealism can be, well, 'good, actually'. And if not quite high art, Holy Motors at least demonstrates that surrealism can still be unnerving and hilariously funny. Indeed, recalling the whimsy of the plot to a close friend, the tears of laughter came unbidden to my eyes once again. ("And then the limousines...!") Still, it is unclear how Holy Motors truly refreshes surrealism for the twenty-first century. Surrealism was, in part, a reaction to the mechanical and unfeeling brutality of World War I and ultimately sought to release the creative potential of the unconscious mind. Holy Motors cannot be responding to another continental conflagration, and so it appears to me to be some kind of commentary on the roles we exhibit in an era of 'post-postmodernity': a sketch on our age of performative authenticity, perhaps, or an idle doodle on the function and psychosocial role of work. Or perhaps not. After all, this film was produced in a time that offers the near-universal availability of mind-altering substances, and this certainly changes the context in which this film was both created and, how can I put it, intended to be watched.

Manchester by the Sea (2016) An absolutely devastating portrayal of a character who is unable to forgive himself and is hesitant to engage with anyone ever again. It features a near-ideal balance between portraying unrecoverable anguish and tender warmth, and is paradoxically grandiose in its subtle intimacy. The mechanics of life led me to watch this lying on a bed in a chain hotel by Heathrow Airport, and if this colourless circumstance blunted the film's emotional impact on me, I am probably thankful for it. Indeed, I find myself reduced in this review to fatuously recalling my favourite interactions instead of providing any real commentary. You could write a whole essay about one particular incident: its surfaces, subtexts and angles... all despite nothing of any substance ever being communicated. Truly stunning.

McCabe & Mrs. Miller (1971) Roger Ebert called this movie "one of the saddest films I have ever seen, filled with a yearning for love and home that will not ever come." But whilst it is difficult to disagree with his sentiment, Ebert's choice of 'sad' is somehow not quite the right word. Indeed, I've long regretted that our dictionaries don't have more nuanced blends of tragedy and sadness; perhaps the Ancient Greeks can loan us some. Nevertheless, the plot of this film is of a gambler and a prostitute who become business partners in a new and remote mining town called Presbyterian Church. However, as their town and enterprise booms, it comes to the attention of a large mining corporation who want to bully or buy their way into the action. What makes this film stand out is not the plot itself, however, but its mood and tone: the town and its inhabitants seem to be thrown together out of raw lumber, covered alternatively in mud or frozen ice, and their days (and their personalities) are both short and dark in equal measure. As a brief aside, if you haven't seen a Robert Altman film before, this has all the trappings of being a good introduction. As Ebert went on to observe: "This is not the kind of movie where the characters are introduced. They are all already here." Furthermore, we can see some of Altman's trademark conversations that overlap, a superb handling of ensemble casts, and a quietly subversive view of the tyranny of 'genre'... and the latter in a time when the appetite for revisionist portrayals of the West was not very strong. All of these 'Altmanian' trademarks can be ordered in much stronger measures in his later films: in particular, his comedy-drama Nashville (1975) has 24 main characters, and my jejune interpretation of Gosford Park (2001) is that it is purposefully designed to poke fun at those who take a reductionist view of 'genre', or at least of the audience's expectations. (In this case, an Edwardian-era English murder mystery in the style of Agatha Christie, but where no real murder or detection really takes place.) On the other hand, McCabe & Mrs. Miller is actually a poor introduction to Altman. The story is told at a suitably deliberate and slow tempo, and the two stars of the film are shown thoroughly defrocked of any 'star status', in both the visual and moral dimensions. All of these traits are, however, this film's strength, adding up to a credible, fascinating and riveting portrayal of the old West.

Detour (1945) Detour was filmed in less than a week, and it's difficult to decide out of the actors and the screenplay which is its weakest point.... Yet it still somehow seemed to drag me in. The plot revolves around luckless Al who is hitchhiking to California. Al gets a lift from a man called Haskell who quickly falls down dead from a heart attack. Al quickly buries the body and takes Haskell's money, car and identification, believing that the police will believe Al murdered him. An unstable element is soon introduced in the guise of Vera, who, through a set of coincidences that stretches credulity, knows that this 'new' Haskell (ie. Al pretending to be him) is not who he seems. Vera then attaches herself to Al in order to blackmail him, and the world starts to spin out of his control. It must be understood that none of this is executed very well. Rather, what makes Detour so interesting to watch is that its 'errors' lend a distinctively creepy and unnatural hue to the film. Indeed, in the early twentieth century, Sigmund Freud used the word unheimlich to describe the experience of something that is not simply mysterious, but something creepy in a strangely familiar way. This is almost the perfect description of watching Detour its eerie nature means that we are not only frequently second-guessed about where the film is going, but are often uncertain whether we are watching the usual objective perspective offered by cinema. In particular, are all the ham-fisted segues, stilted dialogue and inscrutable character motivations actually a product of Al inventing a story for the viewer? Did he murder Haskell after all, despite the film 'showing' us that Haskell died of natural causes? In other words, are we watching what Al wants us to believe? Regardless of the answers to these questions, the film succeeds precisely because of its accidental or inadvertent choices, so it is an implicit reminder that seeking the director's original intention in any piece of art is a complete mirage. Detour is certainly not a good film, but it just might be a great one. (It is a short film too, and, out of copyright, it is available online for free.)

Safe (1995) Safe is a subtly disturbing film about an upper-middle-class housewife who begins to complain about vague symptoms of illness. Initially claiming that she doesn't feel right, Carol starts to have unexplained headaches, a dry cough and nosebleeds, and eventually begins to have trouble breathing. Carol's family doctor treats her concerns with little care, and suggests to her husband that she sees a psychiatrist. Yet Carol's episodes soon escalate. For example, as a 'homemaker' and with nothing else to occupy her, Carol orders a new couch for a party. But when the store delivers the wrong one (although it is not altogether clear that they did), Carol has a near breakdown. Unsure where to turn, an 'allergist' tells Carol she has "Environmental Illness," and so Carol eventually checks herself into a new-age commune filled with alternative therapies. On the surface, Safe is thus a film about the increasing amount of pesticides and chemicals in our lives, something that was clearly felt far more viscerally in the 1990s. But it is also a film about how the lack of genuine healthcare for women must be seen as a critical factor in the rise of crank medicine. (Indeed, it made for something of an uncomfortable watch during the coronavirus lockdown.) More interestingly, however, Safe gently-yet-critically examines the psychosocial causes that may be aggravating Carol's illnesses, including her vacant marriage, her hollow friends and the 'empty calorie' stimulus of suburbia. None of this should be especially new to anyone: the gendered Victorian term 'hysterical' is often all but spoken throughout this film, and perhaps from the very invention of modern medicine, women's symptoms have regularly been minimised or outright dismissed. (Hilary Mantel's 2003 memoir, Giving Up the Ghost, is especially harrowing on this.) As I opened this review, the film is subtle in its messaging. Just to take one example from many, the sound of the cars is always just a fraction too loud: there's a scene where a group is eating dinner with a road in the background, and the total effect can be seen as representing the toxic fumes of modernity invading our social lives and health. I won't spoil the conclusion of this quietly devastating film, but don't expect a happy ending.

The Driver (1978) Critics grossly misunderstood The Driver when it was first released. They interpreted the cold and unemotional affect of the characters as a lack of developmental depth, instead of as representing their dissociation from the society around them. This reading was encouraged by the fact that the principal actors aren't given real names and are instead known simply by their archetypes: 'The Driver', 'The Detective', 'The Player' and so on. This sort of quasi-Jungian erudition is common in many crime films today (Reservoir Dogs, Kill Bill, Layer Cake, Fight Club), so the critics' misconceptions were entirely reasonable in 1978. The plot of The Driver involves the eponymous Driver, a noted getaway driver for robberies in Los Angeles. His exceptional talent has prevented him from being captured thus far, so the Detective attempts to catch the Driver by pardoning another gang if they help convict the Driver via a set-up robbery. To give himself an edge, however, The Driver seeks help from the femme fatale 'Player' in order to mislead the Detective. If this all sounds eerily familiar, you would not be far wrong. The film was essentially remade by Nicolas Winding Refn as Drive (2011) and in Edgar Wright's 2017 Baby Driver. Yet The Driver offers something that these neon-noir variants do not. In particular, the car chases around Los Angeles are some of the most captivating I've seen: they aren't thrilling in the sense of tyre squeals, explosions and flying boxes, but rather the vehicles come across like wild animals hunting one another. This feels especially so when the police are hunting The Driver, which feels less like a low-stakes game of cat and mouse than a pack of feral animals working together: a gang who will tear apart their prey if they find him. In contrast to the undercar neon glow of the Fast & Furious franchise, the urban realism backdrop of The Driver's LA metropolis contributes to a sincere feeling of artistic fidelity as well. To be sure, most of this is present in the truly excellent Drive, where the chase scenes do really communicate a credible sense of stakes. But the substitution of The Driver's grit with Drive's soft neon tilts it slightly towards that common affliction of crime movies: style over substance. Nevertheless, I can highly recommend watching The Driver and Drive together, as it can tell you a lot about the disconnected socioeconomic practices of the 1980s compared to the 2010s. More than that, however, the pseudo-1980s synthwave soundtrack of Drive captures something crucial to analysing the world of today. In particular, these 'sounds from the past filtered through the present' bring to mind the increasing role of nostalgia for lost futures in the culture of today, where temporality and pop culture references are almost exclusively citational and commemorational.

The Souvenir (2019) The ostensible outline of this quietly understated film follows a shy but ambitious film student who falls into an emotionally fraught relationship with a charismatic but untrustworthy older man. But that doesn't quite cover the plot at all, for not only is The Souvenir a film about a young artist who is inspired, derailed and ultimately strengthened by a toxic relationship, it is also partly a coming-of-age drama, a subtle portrait of class and, finally, a film about the making of a film. Still, one of the geniuses of this truly heartbreaking movie is that none of these many elements crowds out the other. It never, ever feels rushed. Indeed, there are many scenes where the camera simply 'sits there' and quietly observes what is going on. Other films might smother themselves through references to 18th-century oil paintings, but The Souvenir somehow evades this too. And there's a certain ring of credibility to the story as well, no doubt in part due to the fact it is based on director Joanna Hogg's own experiences at film school. A beautifully observed and multi-layered film; I'll be happy if the sequel is one-half as good.

The Wrestler (2008) Randy 'The Ram' Robinson is long past his prime, but he is still rarin' to go in the local pro-wrestling circuit. Yet after a brutal beating that seriously threatens his health, Randy hangs up his tights and pursues a serious relationship... and even tries to reconnect with his estranged daughter. But Randy can't resist the lure of the ring, and readies himself for a comeback. The stage is thus set for Darren Aronofsky's The Wrestler, which is essentially about what drives Randy back to the ring. To be sure, Randy derives much of his money from wrestling, as well as his 'fitness', self-image, self-esteem and self-worth. Oh, it's no use insisting that wrestling is fake, for the sport is, needless to say, Randy's identity; it's not for nothing that this film is called The Wrestler. In a number of ways, The Sound of Metal (2019) is both a reaction to (and a quiet remake of) The Wrestler, if only because both movies utilise 'cool' professions to explore such questions of identity. But perhaps simply when The Wrestler was produced makes it the superior film. Indeed, the role of time feels very important for The Wrestler. In the first instance, time is clearly taking its toll on Randy's body, but I felt it more strongly in the sense that this was very much a pre-2008 film, released on the cliff-edge of the global financial crisis, and the concomitant precarity of the 2010s. Indeed, it is curious to consider that you couldn't make The Wrestler today, although not because the relationship to work has changed in any fundamental way. (Indeed, isn't it somewhat depressing to realise that, setting the start of the pandemic and the 'work from home' trend to one side, we now require even more people to wreck their bodies and mental health to cover their bills?) No, what I mean to say here is that, post-2016, you cannot portray wrestling on-screen without, how can I put it, unwelcome connotations. All of which then reminds me of Minari's notorious red hat... But I digress. The Wrestler is a grittily stark, darkly humorous look into the life of a desperate man and a sorrowful world, all through one tragic profession.

Thief (1981) Frank is an expert professional safecracker who specialises in high-profile diamond heists. He plans to use his ill-gotten gains to retire from crime and build a life for himself with a wife and kids, so he signs on with a top gangster for one last big score. This, of course, could be the plot to any number of heist movies, but Thief does something different. Similar to The Wrestler and The Driver (see above) and a number of other films that I watched this year, Thief seems to be saying something about our relationship to work and family in modernity and postmodernity. Indeed, the 'heist film', we are told, is an understudied genre, but part of the pleasure of watching these films is said to arise from how they portray our desired relationship to work. In particular, Frank's desire to pull off that last big job feels less about the money it would bring him than a displacement from (or proxy for) fulfilling some deep-down desire to have a family, or indeed any relationship at all. Because in theory, of course, Frank could enter into a fulfilling long-term relationship right away, without stealing millions of dollars in diamonds... but that's kinda the entire point: Frank needing just one more theft is an excuse to not pursue a relationship and put it off indefinitely in favour of 'work'. (And being federal crimes, it also means Frank cannot put down meaningful roots in a community.) All this is communicated extremely subtly in the justly-lauded low-key diner scene, by far the best scene in the movie. The visual aesthetic of Thief is as if you set The Warriors (1979) in a similarly-filthy Chicago, with the Xenophon-inspired plot of The Warriors replaced with an almost deliberate lack of plot development... and the allure of The Warriors' fantastical criminal gangs (with their alluringly well-defined social identities) substituted by a bunch of amoral individuals with no solidarity beyond the immediate moment. A tale of our time, perhaps. I should warn you that the ending of Thief is famously weak, but this is a gritty, intelligent and strangely credible heist movie before you get there.

Uncut Gems (2019) The most exhausting film I've seen in years; the cinematic equivalent of four cups of double espresso, I didn't even bother trying to sleep after downing Uncut Gems late one night. Directed by the two Safdie brothers, it often felt like I was watching two films that had been made at the same time. (Or do I mean two films at 2X speed?) No, whatever clumsy metaphor you choose to adopt, the unavoidable effect of this film's finely-tuned chaos is an uncompromising and anxiety-inducing piece of cinema. The plot follows Howard as a man lost to his countless vices, mostly gambling with a significant side hustle in adultery, but you get the distinct impression he would be happy with anything that will give him another high. A true junkie's junkie, you might say. You know right from the beginning it's going to end in some kind of disaster; the only question remaining is precisely how and what. Portrayed by an (almost unrecognisable) Adam Sandler, there's an uncanny sense of distance in the emotional chasm between 'Sandler-as-junkie' and 'Sandler-as-regular-star-of-goofy-comedies'. Yet instead of being distracting and reducing the film's affect, this possibly-deliberate intertextuality somehow adds to the masterfully-controlled mayhem. My heart races just at the memory. Oof.

Woman in the Dunes (1964) I ended up watching three films that feature sand this year: Denis Villeneuve's Dune (2021), Lawrence of Arabia (1962) and Woman in the Dunes. But it is this last 1964 film by Hiroshi Teshigahara that will stick in my mind in the years to come. Sure, there is none of the Medicean intrigue of Dune or the Super Panavision-70 of Lawrence of Arabia (or its quasi-orientalist score, itself likely stolen from Anton Bruckner's 6th Symphony), but Woman in the Dunes doesn't have to assert its confidence so boldly, and it reveals the enormity of its plot slowly and deliberately instead. Woman in the Dunes never rushes to get to the film's central dilemma, and it uncovers its terror in little hints and insights, all whilst establishing the daily rhythm of life. Woman in the Dunes has something of the same uncanny horror as Dogtooth (see above), as well as its broad range of potential interpretations. Both films permit a wide array of readings, without resorting to being deliberately obscurantist or just plain random; it is perhaps for this reason that I enjoyed them so much. It is true that asking 'So what does the sand mean?' sounds tediously sophomoric shorn of any context, but it somehow applies to this thoughtfully self-contained piece of cinema.

A Quiet Place (2018) Although A Quiet Place was not actually one of the best films I saw this year, I'm including it here as it is certainly one of the better 'mainstream' Hollywood franchises I came across. Not only is the film very ably constructed and engaging on a visceral level, but it is also rare that I can empathise with the peril of conventional horror movies (I usually prefer to focus on their cultural and political aesthetics), and I did here. The conceit of this particular post-apocalyptic world is that a family is forced to live in almost complete silence while hiding from creatures that hunt by sound alone. Still, A Quiet Place engages on an intellectual level too, and this probably works in tandem with the pure 'horrific' elements and makes it stick in your mind. In particular, and to my mind at least, A Quiet Place is a deeply American conservative film below the surface: it exalts the family structure and a certain kind of sacrifice for your family. (The music often had a passacaglia-like strain too, forming a tombeau for America.) Moreover, you survive in this dystopia by staying quiet, that is to say, by staying stoic, suggesting that in the wake of any conflict that might beset the world, the best thing to do is to keep quiet. Even communicating with your loved ones can be deadly to both of you, so do not emote, acquiesce quietly to your fate, and don't, whatever you do, speak up. (Or join a union.) I could go on, but A Quiet Place is more than this. It's taut and brief, and despite cinema being an increasingly visual medium, it encourages its audience to develop a new relationship with sound.

Russell Coker: SSD Endurance

I previously wrote about the issue of swap potentially breaking SSD [1]. My conclusion was that swap wouldn't be a problem, as no normally operating system that I run had swap using any significant fraction of total disk writes. In that post the most writes I could see was 128GB written per day on a 120G Intel SSD (writing the entire device once a day). My post about swap and SSD was based on the assumption that you could get many thousands of writes to the entire device, which was incorrect. Here's a background on the terminology from WD [2]. So in the case of the 120G Intel SSD I was doing over 1 DWPD (Drive Writes Per Day), which is in the middle of the range of SSD capability; Intel doesn't specify the DWPD or TBW (Terabytes Written) for that device. The most expensive and high-end NVMe device sold by my local computer store is the Samsung 980 Pro, which has a warranty of 150TBW for the 250G device and 600TBW for the 1TB device [3]. That means that the system which used to have an Intel SSD would have exceeded the warranty in 3 years if it had a 250G device. My current workstation has been up for just over 7 days and has averaged 110GB written per day. It has some light VM use and the occasional kernel compile, a fairly typical developer workstation. Its storage is 2*Crucial 1TB NVMe devices in a BTRFS RAID-1; the NVMe devices are the old series of Crucial ones and are rated for 200TBW, which means that they can be expected to last for 5 years under the current load. This isn't a real problem for me as the performance of those devices is lower than I hoped for, so I will buy faster ones before they are 5yo anyway. My home server (and my wife's workstation) is averaging 325GB per day on the SSDs used for the RAID-1 BTRFS filesystem for root and for most of the data that gets written (including VMs). The SSDs are 500G Samsung 850 EVOs [4], which are rated at 150TBW, which means just over a year of expected lifetime. The SSDs are much more than a year old; I think Samsung stopped selling them more than a year ago. Between the 2 SSDs, SMART reports 18 uncorrectable errors and btrfs device stats reports 55 errors on one of them. I'm not about to immediately replace them, but it appears that they are well past their prime. The server which runs my blog (among many other things) is averaging over 1TB written per day. It currently has a RAID-1 of hard drives for all storage, but its previous incarnation (which probably had about the same amount of writes) had a RAID-1 of enterprise SSDs for the most-written data. After a few years of running like that (and some time running with someone else's load before it) the SSDs became extremely slow (sustained writes of 15MB/s) and started getting errors. So that's a pair of SSDs that were burned out. Conclusion The amounts of data being written are steadily increasing. Recent machines with more RAM can decrease storage usage in some situations, but that doesn't compare to the increased use of checksummed and logged filesystems, VMs, databases for local storage, and other things that multiply writes. The amount of writes allowed under warranty isn't increasing much, and there are new technologies for larger SSD storage that decrease the DWPD rating of the underlying hardware. For the systems I own it seems that they are all going to exceed the rated TBW for the SSDs before I have other reasons to replace them, and they aren't particularly high-usage systems. A mail server for a large number of users would hit it much earlier. RAID of SSDs is a really good thing.
Replacement of SSDs is something that should be planned for, and a way of swapping SSDs to less important uses is also good (my parents have some SSDs that are too small for my current use but which work well for them). Another thing to consider is that if you have a server with spare drive bays you could put some extra SSDs in to spread the wear among a larger RAID-10 array. Instead of having a 2*SSD BTRFS RAID-1 for a server you could have 6*SSD to get a 3x longer lifetime than a regular RAID-1 before the SSDs wear out (BTRFS supports this sort of thing). Based on these calculations and the small number of errors I've seen on my home server, I'll add a 480G SSD I have lying around to the array to spread the load and keep it running for a while longer.
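As a back-of-the-envelope check of the arithmetic above, here is a minimal Python sketch of the same calculation; the TBW ratings and daily write figures are simply the ones quoted in this post, not authoritative numbers for any particular drive:
# Rough SSD lifetime estimate from a warranty TBW rating and observed daily writes.
def years_until_tbw(tbw_rating_tb: float, writes_gb_per_day: float) -> float:
    """Years until the warranty Terabytes Written figure is reached."""
    days = (tbw_rating_tb * 1000) / writes_gb_per_day
    return days / 365

# Figures quoted above: Crucial 1TB NVMe rated 200TBW at ~110GB/day,
# Samsung 850 EVO 500G rated 150TBW at ~325GB/day.
print(f"workstation NVMe: {years_until_tbw(200, 110):.1f} years")  # ~5 years
print(f"home server SSD:  {years_until_tbw(150, 325):.1f} years")  # ~1.3 years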

12 November 2021

Adnan Hodzic: wp-k8s: WordPress on privately hosted Kubernetes cluster (Raspberry Pi 4 + Synology)

The blog post you're reading right now is privately hosted on a Raspberry Pi 4 Kubernetes cluster with its data coming from an NFS share and MariaDB on... The post wp-k8s: WordPress on privately hosted Kubernetes cluster (Raspberry Pi 4 + Synology) appeared first on FoolControl: Phear the penguin.

17 October 2021

Adnan Hodzic: wp-k8s: WordPress on Kubernetes project (GKE, cloud SQL, NFS, cluster autoscaling, HPA, VPA, Ingress, Let's Encrypt)

The title of this blog post isn't a collection of every Kubernetes-related buzzword I could think of. It's a collection of technologies that went into... The post wp-k8s: WordPress on Kubernetes project (GKE, cloud SQL, NFS, cluster autoscaling, HPA, VPA, Ingress, Let's Encrypt) appeared first on FoolControl: Phear the penguin.

15 October 2021

Adnan Hodzic: Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

20 June 2021

Julian Andres Klode: Migrating away from apt-key

This is an edited copy of an email I sent to provide guidance to users of apt-key as to how to handle things in a post apt-key world. The manual page already provides all you need to know for replacing apt-key add usage:
Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either gpg or asc as file extension
So it's kind of surprising that people need step-by-step instructions for how to copy/download a file into a directory. I'll also discuss the alternative security-snakeoil approach with signed-by that's become popular. Maybe we should not have added signed-by; people seem to forget that debs still run maintainer scripts as root. Aside from this email, Debian users should look into extrepo, which manages curated external repositories for you.

Direct translation Assume you currently have:
wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -
To translate this directly for bionic and newer, you can use:
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.asc https://myrepo.example/myrepo.asc
or to avoid downloading as root:
wget -qO- https://myrepo.example/myrepo.asc | sudo tee -a /etc/apt/trusted.gpg.d/myrepo.asc
Older (and all) releases only support unarmored files with an extension .gpg. If you care about them, provide one, and use
sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.gpg https://myrepo.example/myrepo.gpg
Some people will tell you to download the .asc and pipe it to gpg --dearmor, but gpg might not be installed, so really, just offer a .gpg one instead that is supported on all systems. wget might not be available everywhere so you can use apt-helper:
sudo /usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /etc/apt/trusted.gpg.d/myrepo.asc
or, to avoid downloading as root:
/usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /tmp/myrepo.asc && sudo mv /tmp/myrepo.asc /etc/apt/trusted.gpg.d

Pretending to be safer by using signed-by People say it's good practice to not use trusted.gpg.d and instead install the file elsewhere and then refer to it from the sources.list entry by using signed-by=<path to the file>. So this looks a lot safer, because now your key can't sign other unrelated repositories. In practice, the security increase is minimal, since package maintainer scripts run as root anyway. But I guess it's better for publicity :) As an example, here are the instructions to install signal-desktop from signal.org. As mentioned, the gpg --dearmor use in there is not a good idea, and I'd personally not tell people to modify /usr as it's supposed to be managed by the package manager, but we don't have an /etc/apt/keyrings or similar at the moment; it's fine though if the keyring is installed by the package. You can also just add the file there as a starting point, and then install a keyring package overriding it (pretend there is a signal-desktop-keyring package below that would override the .gpg we added).
# NOTE: These instructions only work for 64 bit Debian-based
# Linux distributions such as Ubuntu, Mint etc.
# 1. Install our official public software signing key
wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null
# 2. Add our repository to your list of repositories
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' |\
  sudo tee -a /etc/apt/sources.list.d/signal-xenial.list
# 3. Update your package database and install signal
sudo apt update && sudo apt install signal-desktop
I do wonder why they do wget | gpg --dearmor, pipe that into a file and then cat | sudo tee it, instead of having it all in one pipeline. Maybe they want nicer progress reporting.

Scenario-specific guidance We have three scenarios: For system image building, shipping the key in /etc/apt/trusted.gpg.d seems reasonable to me; you are the vendor, sort of, so it can be globally trusted. Chrome-style debs and repository config debs: If you ship a deb, embedding the sources.list.d snippet (calling it $myrepo.list) and shipping a $myrepo.gpg in /usr/share/keyrings is the best approach. Whether you ship that in product debs aka vscode/chromium or provide a repository configuration deb (let's call it myrepo-repo.deb) and then tell people to run apt update followed by apt install <package inside the repo> depends on how many packages are in the repo, I guess. Manual instructions (signal style): The third case, where you tell people to run wget themselves, I find tricky. As we see in signal, just stuffing keyring files into /usr/share/keyrings is popular, despite /usr being supposed to be managed by the package manager. We don't have another dir inside /etc (or /usr/local), so it's hard to suggest something else. There's no significant benefit from actually using signed-by, so it's kind of extra work for little gain, though.

Addendum: Future work This part is new, just for this blog post. Let's look at upcoming changes and how they make things easier.

Bundled .sources files Assuming I get my merge request merged, the next version of APT (2.4/2.3.something) will do away with all the complexity and allow you to embed the key directly into a deb822 .sources file (which have been available for some time now):
Types: deb
URIs: https://myrepo.example/ https://myotherrepo.example/
Suites: stable not-so-stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mDMEYCQjIxYJKwYBBAHaRw8BAQdAD/P5Nvvnvk66SxBBHDbhRml9ORg1WV5CvzKY
 CuMfoIS0BmFiY2RlZoiQBBMWCgA4FiEErCIG1VhKWMWo2yfAREZd5NfO31cFAmAk
 IyMCGyMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQREZd5NfO31fbOwD6ArzS
 dM0Dkd5h2Ujy1b6KcAaVW9FOa5UNfJ9FFBtjLQEBAJ7UyWD3dZzhvlaAwunsk7DG
 3bHcln8DMpIJVXht78sL
 =IE0r
 -----END PGP PUBLIC KEY BLOCK-----
Then you can just provide a .sources file to users, they place it into sources.list.d, and everything magically works. Probably adding a nice apt add-source command for it, I guess. Well, python-apt's aptsources package still does not support deb822 sources, and never will; we'll need an aptsources2 for that for backwards-compatibility reasons, and then port software-properties and other users to it.

OpenPGP vs aptsign We do have a better, tighter replacement for gpg in the works which uses Ed25519 keys to sign Release files. It's temporarily named aptsign, but it's a generic signer for single-section deb822 files, similar to signify/minisign. We believe that this solves the security nightmare that our OpenPGP integration is, while reducing complexity at the same time. Keys are much shorter, so the bundled sources file above will look much nicer.
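aptsign itself is not released, so the following is only a generic Python illustration of the underlying signify/minisign-style idea (signing a blob with an Ed25519 key and verifying it with the corresponding public key), using the third-party PyNaCl library; none of the names below come from APT:
# Illustration only: Ed25519 sign/verify with PyNaCl (pip install pynacl).
# This is *not* aptsign; it merely demonstrates the primitive it builds on.
from nacl.signing import SigningKey

signing_key = SigningKey.generate()      # private key, kept by the repository owner
verify_key = signing_key.verify_key      # public key, shipped to users

release_data = b"Origin: myrepo\nSuite: stable\n"   # stand-in for a Release file
signed = signing_key.sign(release_data)  # signed.signature is a 64-byte Ed25519 signature

# A client holding only the public key verifies the contents; a tampered file
# raises nacl.exceptions.BadSignatureError instead of returning.
verify_key.verify(release_data, signed.signature)
print("signature OK:", len(signed.signature), "bytes")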

16 June 2021

Julien Danjou: Python Tools to Try in 2021

The Python programming language is one of the most popular and in-demand languages. It is free, has a large community, is intended for the development of projects of varying complexity, is easy to learn, and opens up great opportunities for programmers. To work comfortably with it, you need special Python tools, which are able to simplify your work. We have selected the best Python tools that will be relevant in 2021.

Mailtrap
As you may know, in order to send an email, you need SMTP (Simple Mail Transfer Protocol). This is because you can't just send a letter directly to the recipient. It needs to be sent to a server, from which the recipient will download the letter using IMAP or POP3. Mailtrap provides an opportunity to send emails in Python. Moreover, Mailtrap provides a REST API to access current emails. It can be used to automate email testing, which will improve your email marketing campaigns. For example, you can check the password recovery form in a Selenium test and immediately see if an email was sent to the correct address. Then take a new password from the email and try to enter the site with it. Cool, isn't it?
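As a minimal sketch of what sending a test message through an SMTP sandbox like Mailtrap can look like from Python, using only the standard library; the host, port and credentials below are placeholders you would replace with the values shown in your own inbox settings:
import smtplib
from email.message import EmailMessage

# Placeholder SMTP settings -- substitute the values from your own Mailtrap inbox.
SMTP_HOST = "sandbox.smtp.example"
SMTP_PORT = 2525
SMTP_USER = "your-username"
SMTP_PASS = "your-password"

msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "user@example.com"
msg["Subject"] = "Password recovery"
msg.set_content("Here is your password reset link: https://example.com/reset")

with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
    server.starttls()                  # upgrade the connection to TLS
    server.login(SMTP_USER, SMTP_PASS)
    server.send_message(msg)           # the message lands in the test inbox, not a real mailbox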

Pros
  • All emails are in one place.
  • Mailtrap provides multiple inboxes.
  • Shared access is present.
  • It is easy to set up.
  • RESTful API

Cons
No visible disadvantages were found.

Django
Django is a free and open-source full-stack framework. It is one of the most important and popular frameworks among Python developers. It helps you move from a prototype to a ready-made working solution in a short time, since its main task is to automate processes and speed up work through associations and libraries. It's a great choice for a product launch. You can use Django if at least a few of the following points interest you:
  • There is a need to develop the server-side of the API.
  • You need to develop a web application.
  • In the course of work, many changes are made, you have to constantly deploy the application and make edits.
  • There are many complex tasks that are difficult to solve on your own, and you will need the help of the community.
  • ORM support is needed to avoid accessing the database directly.
  • There is a need to integrate new technologies such as machine learning.
Django is a great Python Web Framework that does its job. It is not for nothing that it is one of the most popular, and is actively used by millions of developers.
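To give a feel for how little boilerplate is needed to get started, here is a hypothetical single-file Django application; it is only a sketch, not the project layout that django-admin startproject would generate for real work:
# minimal_django.py -- run with: python minimal_django.py runserver
import sys

from django.conf import settings
from django.http import JsonResponse
from django.urls import path

settings.configure(
    DEBUG=True,
    SECRET_KEY="dev-only-not-a-real-secret",
    ALLOWED_HOSTS=["*"],
    ROOT_URLCONF=__name__,   # this module doubles as the URL configuration
)

def index(request):
    # A trivial JSON view; a real project would add models, templates, the ORM, etc.
    return JsonResponse({"framework": "Django", "message": "hello"})

urlpatterns = [path("", index)]

if __name__ == "__main__":
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)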

Pros
Django has quite a few advantages. It contains a large number of ready-made solutions, which greatly simplifies development. The admin panel, database migrations, various forms, and user authentication tools are extremely helpful. The structure is very clear and simple. A large community helps to solve almost any problem. Thanks to the ORM, there is a high level of security and it is comfortable to work with databases.

Cons
Despite its powerful capabilities, Django has drawbacks. It is very massive and monolithic, therefore it develops slowly. Despite the many generic modules, the development speed of Django itself is reduced.

CherryPy
CherryPy is a micro-framework. It is designed to solve specific problems, capable of running the program on any operating system. CherryPy is used in the following cases:
  • To create an application with small code size.
  • There is a need to manage several servers at the same time.
  • You need to monitor the performance of applications.
CherryPy refers to Python Frameworks, which are designed for specific tasks. It's clear, user-friendly, and ideal for Android development.
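A minimal, hypothetical CherryPy application really is only a few lines, which is exactly the point of a micro-framework:
import cherrypy

class HelloWorld:
    @cherrypy.expose
    def index(self):
        # Served at http://127.0.0.1:8080/ once quickstart() is running.
        return "Hello from CherryPy"

if __name__ == "__main__":
    cherrypy.quickstart(HelloWorld())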

Pros
The CherryPy Python tool has a friendly and understandable development environment. This is a functional and complete framework, which can be used to build good applications. The source code is open, so the platform is completely free for developers, and the community, although not too large, is very responsive, and always helps to solve problems.

Cons
There are not so many cons to this Python tool. It is not capable of performing complex tasks and functions; it is intended more for specific solutions, for example, for the development of certain plugins or modules.

Pyramid
The Python Pyramid tool is designed for programming complex objects and solving multifunctional problems. It is used by professional programmers and is traditionally used for identification and routing. It is aimed at a wide audience and is capable of developing API prototypes. It is used in the following cases:
  • You need problem indicator tools to make timely adjustments and edits.
  • You use several programming languages at once;
  • You work with reporting and financial calculations, forecasting;
  • You need to quickly create a simple application.
At the same time, the Python Web Framework Pyramid allows you to create complex applications with great functionality, like translation software.
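To make the 'simple and basic programming techniques' point concrete, here is a minimal Pyramid sketch that serves a single route with the standard library's WSGI server:
from wsgiref.simple_server import make_server

from pyramid.config import Configurator
from pyramid.response import Response

def hello(request):
    return Response("Hello from Pyramid")

if __name__ == "__main__":
    with Configurator() as config:
        config.add_route("hello", "/")
        config.add_view(hello, route_name="hello")
        app = config.make_wsgi_app()
    # Serve on localhost:6543; a production deployment would use a real WSGI server.
    make_server("127.0.0.1", 6543, app).serve_forever()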

Pros
Pyramid does an excellent job of developing basic applications quickly. It is quite flexible and easy to learn. In fact, the key to the success of this framework is that it is completely based on fundamental principles, using simple and basic programming techniques. It is minimalistic, but at the same time offers users a lot of freedom of action. It is able to work with both small applications and powerful multifunctional programs.

Cons
It is difficult to deviate from the basic principles; this Python tool makes the decisions for you. Simple programs are very easy to implement, but to do something complex and large-scale, you have to completely immerse yourself in the study of the environment and obey it.

Grok
Grok is a Python tool that works with templates. Its main task is to eliminate repetition in the code. If an element is repeated, the template that was created earlier is simply applied. This greatly simplifies and speeds up the work. Grok suits developers in the following cases:
  • If a programmer has little experience and is not yet ready to develop his modules.
  • There is a need to quickly develop a simple application.
  • The functionality of the application is simple, straightforward, and the interface does not play a key role.

Pros
The Grok framework is a child of Zope 3, which was released earlier. It has a simplified structure of work, easy installation of modules, more capabilities, and better flexibility. It is designed to develop small applications. Yes, it is not intended for complex work, but due to its functionality, it allows you to quickly implement a project.

Cons
The Grok community is not very large, as this Python tool has not gained widespread popularity. Nevertheless, it is used by Python adepts for comfortable development. It is impossible to implement complex tasks on it, since the possibilities are quite limited. Grok is one of the best Python Web Frameworks: it is understandable and has enough features for comfortable development.

Web2Py
Web2Py is a Python tool that has its own IDE, which includes a code editor, debugger, and deployment tools. It works great without the need for configuration or installation, provides a high level of data security, and is suitable for work on various platforms. Web2Py is great in the following cases:
  • When there is a need to develop something on different operating systems.
  • If there is no way to install and configure the framework.
  • When a high level of data security is required, for example, when developing financial applications or sales performance management tools.
  • If you need to carefully track bugs right during development, and not during the testing phase.

Pros
Web2Py is capable of working with different protocols, has a built-in error tracker, and has a backward compatibility feature that helps to work on the basis of previous versions of the framework. This means that code maintenance becomes much easier and cheaper. It's free, open-source, and very flexible.

Cons
Among the many Python tools, there are not many that require the latest version of the language. Web2Py is one of those and won't work on anything below Python 3. Therefore, you need to constantly monitor the updates. Web2Py does an excellent job of its tasks. It is quite simple and accessible to everyone.

BlueBream
BlueBream used to be called Zope 3. It copes well with tasks of medium and high complexity and is suitable for working on serious projects.

Pros
The BlueBream build system is quite powerful and suitable for complex tasks. You can create functional applications on it, and the principle of reuse of components makes the code easier. At the same time, the speed of development increases. The software can be scaled, and a transactional object database provides an easy path to store it. This means that queries are processed quickly and database management is simple.

Cons
This is not a very flexible framework; it is better to know clearly in advance what is required of it. In addition, it cannot withstand heavy loads. When working with 1000 users at the same time, it can crash and give errors. Therefore, it should be used to solve narrow problems. Python frameworks are often designed for specific tasks. BlueBream is one of these and is suitable for applications where database management plays a key role.

Conclusion
Python tools come in different forms and have vastly different capabilities. There are quite a few of them, but in 2021 these will be the most popular and in demand. Experienced programmers always choose several development tools for their comfortable work.

3 June 2021

John Goerzen: Roundup of Unique Data/Storage Hosting Options

Recently I have been taking another look at the services at rsync.net and it got me thinking: what would I do with a lot of storage? What might I want to run with it, if it were fairly cheap? Let's start taking a look at what's out there. I'm going to try to focus on things that are unique for some reason: pricing, features, etc. Incidentally, good reviews are hard to find due to the proliferation of affiliate links. I have no affiliate relationships with anyone mentioned here and there are no affiliate links in this post. I'll start with the highest-end community and commercial options (though both are quite competitive on price for what they are), and then move on to the cheaper options. Community option: SDF SDF is somewhat hard to define. What is SDF? could prompt answers like: There's a lot there. SDF lets you use things for yourself, of course, but you can also join a community. It's not a commercial service backed by SLAs; it's best-effort, but it's been around more than 30 years and has a great track record. Top commercial option for backup storage: rsync.net rsync.net offers storage broadly over SSH: sftp, rsync, scp, borg, rclone, restic, git-annex, git, and such. You do not get a shell, but you do get to run a few noninteractive commands via ssh. You can, for instance, run git clone on the rsync server. The rsync special sauce is in ZFS. They run raidz3 on their arrays (and also offer dual-location setups for an additional fee), offer both free and paid ZFS snapshots, etc. The service is designed to be extremely reliable, particularly for backups, and it seems to me to meet those goals. Basic storage is $0.025 per GB/mo, but with certain account types such as borg, can be had for $0.015 per GB/mo. The minimum size is 400GB or $10/mo. There are no bandwidth charges. This makes it quite economical even compared to, say, S3. Additional discounts start at 10TB, so 10TB with rsync.net would cost $204.80/mo or $81.92 on the borg plan. You won't run Nextcloud on this thing, but for backups that must be reliable, or even a photo collection or something, it makes perfect sense. When you look into other options, you'll find that other providers are a lot more vague about their storage setup than rsync.net. Various offerings from Hetzner Hetzner is one of Europe's large hosting companies, and they have several options of interest. Their Storage Box competes directly with the rsync.net service. Their per-GB storage cost is lower than rsync.net's, and although they do include a certain amount of free bandwidth with each account, bandwidth is not unlimited and could result in charges. Still, if you don't drive 2x or more your storage usage in bandwidth each month, it would be cheaper than rsync. The Storage Box also uses ZFS with some kind of redundancy, though they don't specify details. What differentiates them from rsync.net is the protocol support. They support sftp, scp, Borg, ssh, rsync, etc. just as rsync.net does. But then they also throw in Samba/CIFS, FTPS, HTTPS, and WebDAV, all optionally enabled or disabled by you. Although things like sshfs exist, they aren't particularly optimal for some use cases, and CIFS support may just be what you need in some situations. 10TB with Hetzner would cost EUR 39.90/mo, or about $48.84/mo. (This figure is higher for Europeans, who also have to pay VAT.) Hetzner also offers a Storage Share, which is a private Nextcloud instance. 10TB of that is exactly the same cost as 10TB of the Storage Box. You can add your own users, groups, etc.
Various offerings from Hetzner

Hetzner is one of Europe's large hosting companies, and they have several options of interest. Their Storage Box competes directly with the rsync.net service. Their per-GB storage cost is lower than rsync.net's, and although they do include a certain amount of free bandwidth with each account, bandwidth is not unlimited and could result in charges. Still, if you don't drive 2x or more of your storage usage in bandwidth each month, it would be cheaper than rsync.net. The Storage Box also uses ZFS with some kind of redundancy, though they don't specify details.

What differentiates them from rsync.net is the protocol support. They support sftp, scp, Borg, ssh, rsync, etc., just as rsync.net does. But then they also throw in Samba/CIFS, FTPS, HTTPS, and WebDAV, all optionally enabled or disabled by you. Although things like sshfs exist, they aren't particularly optimal for some use cases, and CIFS support may be just what you need in some situations. 10TB with Hetzner would cost EUR 39.90/mo, or about $48.84/mo. (This figure is higher for Europeans, who also have to pay VAT.)

Hetzner also offers a Storage Share, which is a private Nextcloud instance. 10TB of that is exactly the same cost as 10TB of the Storage Box. You can add your own users, groups, etc. to this, as you are the Nextcloud admin of your instance. Hetzner throws in automatic updates (which is great, as updates have been a pain in my side for a long time). Nextcloud is ideal for things like photo sharing, even has email and chat built in, etc. For about the same price as 2TB of Google One, you can have 2TB of Nextcloud with all those services for yourself. Not bad. You can also mount a Nextcloud instance with WebDAV.

Interestingly, Nextcloud supports external storage as a backend for the data. It supports another Nextcloud instance, OpenStack or S3 object storage, and SFTP, SMB/CIFS, and WebDAV. If you're thinking you'd like both SFTP and Nextcloud access to a pool of storage, I imagine you could always get a large Storage Box from Hetzner (internal transfer is free), pair it with a small Nextcloud instance, and link the two with Nextcloud external storage.

Dedicated Servers

If you want a more DIY approach, you can find some interesting deals on actual dedicated server hardware; you get the entire machine to yourself. I've been using OVH's SoYouStart for a number of years, with good experiences, and they have a number of server configurations available. For instance, for $45.99, you can get a Xeon box with 4x2TB drives and 32GB RAM. With RAID5 or raidz1, that's 6TB of available space, and cheaper than the 6TB from rsync.net (though less redundant), plus you get the whole box to yourself too. OVH directly has some more storage servers; for instance, you can get a box with 4x4TB + 1x500GB SSD for $86.75/mo, giving you 12TB available with RAID5/raidz1, plus a 16GB server to do what you want with. Hetzner also has some larger options available, for instance 2x4TB at EUR 39 or 2x8TB at EUR 54, both with 64GB of RAM.
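The usable-capacity figures for these dedicated boxes follow from simple parity arithmetic: RAID5/raidz1 gives up one drive to parity, raidz2 two, raidz3 three. Here is a minimal sketch of that calculation, using the drive configurations mentioned above and ignoring filesystem overhead and TB/TiB rounding.

```python
# Usable capacity for simple parity layouts over equal-size drives.
# parity_drives: 1 for RAID5/raidz1, 2 for raidz2, 3 for raidz3.

def usable_tb(num_drives: int, drive_tb: float, parity_drives: int = 1) -> float:
    """Usable capacity in TB, ignoring filesystem overhead and TB/TiB rounding."""
    if num_drives <= parity_drives:
        raise ValueError("need more drives than parity devices")
    return (num_drives - parity_drives) * drive_tb

print(usable_tb(4, 2))  # SoYouStart 4x2TB with RAID5/raidz1 -> 6.0 TB usable
print(usable_tb(4, 4))  # OVH 4x4TB with RAID5/raidz1 -> 12.0 TB usable
```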
Bargain Corner

Yes, you can find 10TB for $25/mo. It's hosted on Ceph, by what appears to be mostly a single person (though with a lot of experience and a fair bit of transparency). You're not going to have the round-the-clock support experience you get with rsync.net, nor its raidz3 level of redundancy, but if you don't need that, there are quite a few options.

Let's start with Lima Labs. Yes, 10TB is $25/mo, and they support sftp, rsync, borg, and even NFS mounts on storage backed by Ceph. The owner, Sam, seems to be a nice guy, but the service isn't going to be on the scale of rsync.net or Hetzner. That may or may not be OK for your needs; I mean, you can even get 1TB for $5/mo, so there are some fantastic deals to be had here.

BorgBase does Borg hosting and Borg hosting only. You can get 1TB for $6.67/mo or, for instance, 10TB for $53.46. They don't say much about their infrastructure and it's hard to get a read on the company, but for Borg backups, it could be a nice option.

Bargain Corner Part 2: Seedboxes

There's a market out there of companies offering BitTorrent seeding and downloading services. Typically, these services offer you Unix ssh access to a shell, give you a bunch of space on completely non-redundant drives (the theory being that the data on them is transient), and lots of bandwidth, for a low price. Some people use them for BitTorrent, others for media serving and such. If you are willing to accept the lowest level of drive redundancy, there are some deals to be had. Whatbox is a popular leader here, and has an extensive wiki with info. Or you can find some seedbox.io shared storage plans; for instance, 12TB for $32.49/mo, but on completely non-redundant drives. Seedbox has a partner company, Walker Servers, with some interesting deals; for instance, 4x8TB for EUR 52.45. Not bad for 24TB usable with RAID5, but Walker Servers is completely unknown to me and doesn't publish a phone number. So, YMMV.

Conclusion

I'm sure I've left out many quality options here, but hopefully this is enough to give a general lay of the land. Leave other suggestions in the comments.

16 May 2021

Russ Allbery: Review: A Desolation Called Peace

Review: A Desolation Called Peace, by Arkady Martine
Series: Teixcalaan #2
Publisher: Tor
Copyright: 2021
ISBN: 1-250-18648-X
Format: Kindle
Pages: 496
A Desolation Called Peace is a direct sequel to A Memory Called Empire and picks up shortly after that book's ending. It completely spoils the first book and builds heavily on previous events. This is not a series to read out of order. It's nearly impossible to discuss anything about the plot of this book without at least minor spoilers for the previous book, so beware. If you've not read A Memory Called Empire, I highly recommend it, and you may want to skip this review until you have.

Mahit Dzmare has returned to Lsel Station and escaped, mostly, the pull of the Teixcalaan Empire in all its seductive arrogance. That doesn't mean Lsel Station is happy to see her. The maneuverings of the station council were only a distant part of the complex political situation she was navigating at the Teixcalaanli capital. Now that she's home, it is far harder to ignore powerful councilors who would be appalled by the decisions she made. The ambassador to a hated foreign empire does not have many allies.

Yaotlek Nine Hibiscus, the empire's newest commander of commanders, is the spear the empire has thrust towards a newly-discovered alien threat. The aliens have already slaughtered all the inhabitants of a mining outpost for no obvious reason, and their captured communications are so strange as to provoke nausea in humans. Their cloaking technology makes the outcome of pitched warfare dangerously uncertain. Nine Hibiscus needs someone who can talk to aliens without mouths, and that means the Information Ministry. The Information Ministry means a newly promoted Three Seagrass, who is suffering from insomnia, desperately bored, and missing Mahit Dzmare. And who sees in Nine Hibiscus's summons an opportunity to address several of those problems at once.

A Memory Called Empire had an SFnal premise and triggering plot machinery, but it was primarily a city political thriller. A Desolation Called Peace moves onto the more familiar SF ground of first contact with a very alien species, but Martine makes the unusual choice of revealing one of the secrets of the aliens to the reader at the start of the book. This keeps the reader's focus more on the political maneuvering than on the mystery, but with a classic first-contact communication problem as the motivating backdrop.

That's only one of the threads of this book, though. Another is the unfinished business between Three Seagrass and Mahit Dzmare, and between Mahit Dzmare and the all-consuming culture of Teixcalaan. A third is the political education of a very exceptional boy, whose mere existence is a spoiler for A Memory Called Empire and therefore not something I will discuss in detail. And then there are the internal politics of Lsel Station, although I thought that was the least effective part of the book and never reached a satisfying conclusion.

This is a lot to balance, and I think that's one of the reasons why A Desolation Called Peace doesn't replicate the magic that made me love A Memory Called Empire so much. Full-steam-ahead pacing with characters who are thinking on their feet and taking desperate risks has a glorious momentum. Here, there's too much going on (not to mention four major viewpoint characters) to maintain the same pace. Once Mahit and Three Seagrass get into the same room, there are moments that are as good as the highlights of A Memory Called Empire, but it's not as sustained as I was hoping for.
This book also spends more time on Mahit and Three Seagrass's relationship, and despite my liking both of the characters, this didn't entirely work for me. Martine uses them to make a subtle and powerful point about relationships across power gradients and the hurt that comes from someone trivializing a conflict that is central to your identity. It took me a while to understand the strength of Mahit's reaction, but it eventually felt right. But that fight wasn't what I was looking for in the book, and there was a bit too much of both of them failing (or refusing) to communicate for my taste. I appreciated what Martine was exploring, but personally I wanted a different sort of catharsis.

That said, this is still a highly enjoyable book. Nine Hibiscus is a solid military SF character who is a good counterweight to the more devious approaches of the other characters. I enjoyed the subplot of the kid in the Teixcalaanli capital more than I expected, although it felt more like setup for future novels than critical to the plot of this one. And then there's Three Seagrass.
Three Seagrass always made decisions wholly and entire. All at once. Choosing information as her aptitudes. Choosing the position of cultural liaison to the Lsel Ambassador. Choosing to trust her. Choosing to come here, to take this assignment entirely, completely, and without pausing to look to see how deep the water was that she was leaping into.
Every word of this is true, and it's so much fun to read. Three Seagrass was a bit overshadowed in A Memory Called Empire, a supporting character in someone else's story. Here, she has moments where she can take the lead, and she's so delightfully different than Mahit. I loved every moment of her viewpoint.

A Desolation Called Peace isn't as taut or as coherent as A Memory Called Empire. The plot sags in a few places, and I think there was a bit too much hopeless Lsel politics, nebulous alien horror, and injured silence between characters. But the high points are nearly as good as the high points of A Memory Called Empire and I adore these characters. If you liked the first book, I think you'll like this one too. More, please!

Rating: 8 out of 10
